11 datasets found
  1. Neural Networks in Friction Factor Analysis of Smooth Pipe Bends

    • data.mendeley.com
    Updated Dec 19, 2022
    Cite
    Adarsh Vasa (2022). Neural Networks in Friction Factor Analysis of Smooth Pipe Bends [Dataset]. http://doi.org/10.17632/sjvbwh5ckg.1
    3 scholarly articles cite this dataset (per Google Scholar).
    Dataset updated
    Dec 19, 2022
    Authors
    Adarsh Vasa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PROGRAM SUMMARY
    No. of lines in distributed program, including test data, etc.: 481
    No. of bytes in distributed program, including test data, etc.: 14540.8
    Distribution format: .py, .csv
    Programming language: Python
    Computer: any workstation or laptop running TensorFlow, Google Colab, Anaconda, Jupyter, pandas, NumPy, Microsoft Azure, and Alteryx
    Operating system: Windows, macOS, Linux

    Nature of problem: Navier-Stokes equations are solved numerically in ANSYS Fluent using the Reynolds stress model for turbulence. The simulated values of the friction factor are validated against theoretical and experimental data from the literature. Artificial neural networks are then used for a prediction-based augmentation of the friction factor. The capabilities of the neural networks are discussed with regard to computational cost and domain limitations.

    Solution method: The simulation data is obtained through Reynolds stress modelling of fluid flow through a pipe. This data is augmented using an artificial neural network model that predicts both within and outside the data domain.

    Restrictions: The code used in this research is limited to smooth pipe bends, in which the friction factor is analysed for steady-state incompressible fluid flow.

    Runtime: The artificial neural network produces results within about 20 seconds for a three-dimensional geometry, using the free computational resources allocated by the Google Colaboratory cloud computing service.
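    As a rough illustration of the prediction-based augmentation described above, here is a minimal Keras regression sketch. The file and column names are assumptions for illustration only; they are not taken from the distributed program.

        # Minimal sketch: a small dense network regressing friction factor
        # from flow parameters. File and column names are hypothetical.
        import pandas as pd
        import tensorflow as tf

        df = pd.read_csv("pipe_bend_data.csv")        # hypothetical file name
        X = df[["reynolds_number", "bend_radius_ratio"]].values
        y = df["friction_factor"].values

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu", input_shape=(2,)),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1),                 # scalar friction factor
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=200, batch_size=16, verbose=0)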

  2. Code and dataset for publication "Laser Wakefield Accelerator modelling with Variational Neural Networks"

    • zenodo.org
    zip
    Updated Jan 8, 2023
    Cite
    M. J. V. Streeter (2023). Code and dataset for publication "Laser Wakefield Accelerator modelling with Variational Neural Networks" [Dataset]. http://doi.org/10.5281/zenodo.7510352
    Available download formats: zip
    Dataset updated
    Jan 8, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    M. J. V. Streeter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and code for reproducing figures in published work.

    High Power Laser Science and Engineering

    https://doi.org/10.1017/hpl.2022.47

    The code uses various Python packages, including TensorFlow.

    The conda environment was created with (on 6th Jan 2022):
    conda create --name tf tensorflow notebook tensorflow-probability pandas tqdm scikit-learn matplotlib seaborn protobuf opencv scipy scikit-image scikit-optimize Pillow PyAbel libclang flatbuffers gast --channel conda-forge
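    For context, "variational" here points to the tensorflow-probability package in the environment above. The following is a generic sketch of a variational (probabilistic) regression head in that library, not the authors' actual architecture; the input width of 10 is arbitrary.

        # Sketch: a network whose output is a distribution rather than a point
        # estimate, trained by negative log-likelihood. Illustrative only.
        import tensorflow as tf
        import tensorflow_probability as tfp

        tfd = tfp.distributions

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
            tf.keras.layers.Dense(2),             # parameters: mean, raw scale
            tfp.layers.DistributionLambda(
                lambda t: tfd.Normal(loc=t[..., :1],
                                     scale=tf.nn.softplus(t[..., 1:]) + 1e-6)),
        ])
        model.compile(optimizer="adam",
                      loss=lambda y, dist: -dist.log_prob(y))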

  3. Bird Species Image Classification Dataset

    • kaggle.com
    Updated Jun 11, 2025
    Cite
    Evil Spirit05 (2025). Bird Species Image Classification Dataset [Dataset]. https://www.kaggle.com/datasets/evilspirit05/birds-species-prediction
    Available download formats: Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jun 11, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Evil Spirit05
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset contains high-quality images of six distinct bird species, curated for use in image classification, computer vision, and biodiversity research tasks. Each bird species included in this dataset is well-represented, making it ideal for training and evaluating deep learning models.

    Label  Species Name          Image Count
    1      American Goldfinch    143
    2      Emperor Penguin       139
    3      Downy Woodpecker      137
    4      Flamingo              132
    5      Carmine Bee-eater     131
    6      Barn Owl              129

    📂 Dataset Highlights:
    • Total Images: 811
    • Classes: 6 unique bird species
    • Balanced Labels: nearly equal distribution across classes
    • Use Cases: image classification, model benchmarking, transfer learning, educational projects, biodiversity analysis

    🧠 Potential Applications:
    • Training deep learning models such as CNNs for bird species recognition
    • Fine-tuning pre-trained models using a small, balanced dataset
    • Educational projects in ornithology and computer vision
    • Biodiversity and wildlife conservation tech solutions

    🛠️ Suggested Tools:
    • Python (Pandas, NumPy, Matplotlib)
    • TensorFlow / PyTorch for model development
    • OpenCV for image preprocessing
    • Streamlit for creating interactive demos
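    As a starting point for the transfer-learning use case above, here is a minimal TensorFlow sketch that fine-tunes a pre-trained CNN on the six classes. The directory name "birds" and the per-class folder layout are assumptions, not taken from the dataset documentation.

        # Sketch: fine-tune a frozen MobileNetV2 backbone on six bird classes.
        # Assumes images arranged as birds/<class_name>/*.jpg (hypothetical).
        import tensorflow as tf

        train_ds = tf.keras.utils.image_dataset_from_directory(
            "birds", image_size=(224, 224), batch_size=32)

        base = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, weights="imagenet")
        base.trainable = False               # freeze the pre-trained backbone

        model = tf.keras.Sequential([
            tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 input scaling
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(6, activation="softmax"),     # six species
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_ds, epochs=5)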

  4. Board Game Ratings by Country

    • kaggle.com
    zip
    Updated Dec 20, 2023
    Cite
    The Devastator (2023). Board Game Ratings by Country [Dataset]. https://www.kaggle.com/datasets/thedevastator/board-game-ratings-by-country/code
    Available download formats: zip (115550553 bytes)
    Dataset updated
    Dec 20, 2023
    Authors
    The Devastator
    Description

    Board Game Ratings by Country

    Global User Ratings of Board Games

    By Michael Petrey [source]

    About this dataset

    This dataset, originating from the beloved board game community site BoardGameGeek and subsequently expanded by Jesse van Elteren to create a more detailed canvas of data, is now further enriched here with additional geographic location information. Broadening the original framework beyond gaming metrics alone enables researchers and enthusiasts to explore international trends, regional preferences, and cultural influences that may permeate the rich tapestry of games.

    1. userID: This indicates a unique identifier assigned to each user within the BoardGameGeek online community. It helps track individual users' behavior as they rate various games.

    2. gameID: This specifies a unique identifier aligned with each board game listed on their platform. It serves as an index that allows us to distinguish between different games receiving ratings.

    3. rating: Reflects the score out of 10 given by a user for a specific game in their review posted on BoardGameGeek, allowing us to understand how well received or popular a particular game is among its audience.

    4. country: A newly added field denoting the country in which the reviewer resides, be it USA, UK, Australia, or elsewhere. This geographic detail, initially absent, enriches the dataset and enables examination of demographic patterns and location-based trends.

    By adding this layer of geolocational context for users who contribute reviews and ratings on BoardGameGeek.com (BGG), this dataset opens up new avenues for exploring not only which games are rated highly but also where those ratings come from globally, creating opportunities for deeper study of localised effects within global gaming communities.

    This versatile compendium forms an essential database for anyone analyzing trends in board gaming: it provides detailed insights about individual games based on user ratings, while also enabling larger-scale study of how localized norms may influence review scores across diverse geographical regions.

    How to use the dataset

    • Understand the Dataset: The first step is to understand what the data represents. This dataset includes board game ratings from users along with their country information. Each row represents a unique rating given by a user for a particular game from a specific country.

    • Load the Data: Using Python libraries like pandas, you can conveniently load this dataset for analysis with the pd.read_csv('file_path') function (see the sketch after this list).

    • Data Exploration: Start digging into the data by checking its distribution, outliers, and missing values, using plots such as histograms or boxplots as well as statistical summaries. These tools are all available in the seaborn, matplotlib, and pandas libraries.

    • Statistical Analysis: Compute average ratings per country, or rank countries by their mean rating, to compare how different countries score games on average.

    • Identify Top Rated Games: Identify the board games with the highest overall user ratings regardless of geography, providing insights into global preferences that are valuable for manufacturers and retailers alike.

    • Countrywise Phenomena: Analyze game popularity within specific countries. Are some games more popular in certain places? Does popularity correlate strongly with high ratings?

    • Machine Learning Modelling: Build models that predict which types of games will be liked or disliked by people in different geographical locations, or that forecast future trends from historical data and surface patterns useful for business strategy.

    • Recommendations: Make recommendations based on users' previous reviews (collaborative filtering), for example using cosine-similarity measures to suggest new games a user might enjoy.

    • Whatever form of modelling you use, remember to split the dataset into training and testing sets before developing and validating your model.

    • Deep learning techniques can be applied using TensorFlow or PyTorch (with CUDA for GPU acceleration).
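    A minimal pandas sketch of the loading and per-country analysis steps above; the file name is hypothetical, but the column names (userID, gameID, rating, country) are those documented for this dataset.

        # Sketch: mean rating per country and top-rated games overall.
        import pandas as pd

        df = pd.read_csv("bgg_ratings.csv")   # hypothetical file name

        # Rank countries by mean rating (with rating counts for context).
        by_country = (df.groupby("country")["rating"]
                        .agg(["mean", "count"])
                        .sort_values("mean", ascending=False))
        print(by_country.head(10))

        # Top-rated games overall, requiring a minimum number of ratings.
        top_games = (df.groupby("gameID")["rating"]
                       .agg(["mean", "count"])
                       .query("count >= 100")
                       .sort_values("mean", ascending=False))
        print(top_games.head(10))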

    In sum, this data se...

  5. API Database of Python frameworks & Labeled Issues

    • data.niaid.nih.gov
    Updated Aug 4, 2021
    Cite
    Anonymous Authors (2021). API Database of Python frameworks & Labeled Issues [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_2756358
    Dataset updated
    Aug 4, 2021
    Authors
    Anonymous Authors
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PyLibAPIs.7z: contains public API data (a MongoDB dump) for these frameworks:
    • TensorFlow
    • Keras
    • scikit-learn
    • Pandas
    • Flask
    • Django

    Label.xlsx: contains issues and their labels

  6. Supporting data for “Deep learning methods and applications to digital health”

    • datahub.hku.hk
    Updated Oct 3, 2024
    Cite
    Shichao Ma (2024). Supporting data for “Deep learning methods and applications to digital health” [Dataset]. http://doi.org/10.25442/hku.27060427.v1
    Dataset updated
    Oct 3, 2024
    Dataset provided by
    HKU Data Repository
    Authors
    Shichao Ma
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This repository contains three folders holding the data or the source code for the three main chapters (Chapters 3, 4, and 5) of the thesis:

    1. Dataset (Chapter 3): /PhysioNet2016 contains the phonocardiogram signals used in Chapters 3 and 4 as the upstream pretraining data; this is a public dataset. /SourceCode includes all the statistical analysis and visualization scripts for Chapter 3. Yaseen_dataset and PASCAL contain phonocardiogram signals with pathological features; Yaseen_dataset serves as the downstream fine-tuning dataset in Chapter 3, while PASCAL serves as the secondary testing dataset in Chapter 3.

    2. Dataset (Chapter 4): /SourceCode includes all the statistical analysis and visualization scripts for Chapter 4.

    3. Dataset (Chapter 5): PAD-UFES-20_processed contains dermatology images processed from the PAD-UFES-20 dataset, which is public; it is used in Chapter 5. /SourceCode includes all the statistical analysis and visualization scripts for Chapter 5.

    Several packages are mandatory to run the source code: Python > 3.6 (3.11 preferred), TensorFlow > 2.16, Keras > 3.3, NumPy > 1.26, Pandas > 2.2, SciPy > 1.13.
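    A small helper for checking the mandatory versions before running the source code; the thresholds are taken from the list above, and the third-party packaging module is assumed to be installed.

        # Sketch: verify installed package versions meet the stated minima.
        import importlib.metadata as md
        from packaging.version import Version

        requirements = {"tensorflow": "2.16", "keras": "3.3",
                        "numpy": "1.26", "pandas": "2.2", "scipy": "1.13"}
        for pkg, minimum in requirements.items():
            installed = Version(md.version(pkg))
            status = "OK" if installed > Version(minimum) else "TOO OLD"
            print(f"{pkg}: {installed} (needs > {minimum}) {status}")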

  7. Data product and code for: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning

    • zenodo.org
    nc, zip
    Updated Dec 30, 2024
    Cite
    Tobias Ehmen; Neill Mackay; Andrew Watson (2024). Data product and code for: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning [Dataset]. http://doi.org/10.5281/zenodo.14575969
    Available download formats: nc, zip
    Dataset updated
    Dec 30, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tobias Ehmen; Neill Mackay; Andrew Watson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data product and code for: Ehmen et al.: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning

    Note that due to the data limit on Zenodo only a compressed version of the ensemble mean is uploaded here (compressed_DIC_mean_15fold_ensemble_aveRMSE7.46_0.15TTcasts_1990-2023.nc). Individual ensemble members can be generated through the weight and scaler files found in weights_and_scalers_DIC_paper.zip and the code "ResNet_DIC_loading_past_prediction_2024-12-28.py" (see description below).

    EN4_thickness_GEBCO.nc contains the scaling factors used in "plot_carbon_inventory_for_ensemble_2024-01-27.py" (see description below).
    DIC_paper_code_Ehmen_et_al.zip contains the python code used to generate products and figures.

    Prerequisites: Python with the modules tensorflow, shap, xarray, pandas and scipy. Plots additionally use matplotlib, cartopy, seaborn, statsmodels, gsw and cmocean.
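    A minimal xarray sketch for opening the compressed ensemble-mean product named above; the variable name "DIC" is an assumption, so inspect ds.data_vars to see what the file actually contains.

        # Sketch: open the ensemble-mean NetCDF product and inspect it.
        import xarray as xr

        ds = xr.open_dataset(
            "compressed_DIC_mean_15fold_ensemble_aveRMSE7.46_0.15TTcasts_1990-2023.nc")
        print(ds)                       # dimensions, coordinates, variables
        # e.g. plot one time step at one depth (variable/dim names assumed):
        # ds["DIC"].isel(time=0, depth=0).plot()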

    The main scripts used to generate reconstructions are “ResNet_DIC_2024-12-28.py” (for new training runs) and “ResNet_DIC_loading_past_prediction_2024-12-28.py” (for already trained past weight and scaler files). Usage:

    • Assign the correct directories in the function “create_directories” according to your own system. You won’t need the same if-statements for individual platforms and computers.
    • Download the most recent version of GLODAP and store it in the directory chosen in “create_directories”. Check that the filename matches the one used in “import_GLODAP_dataset”. Unless the GLODAP creators change their column naming system, newer versions can be used instead of GLODAPv2.2023.
    • Download the HOT, BATS and Drake Passage time series and ensure the filenames are the same as in “import_time_series_data”. Store them in the time series directory chosen in “create_directories”. This step is optional, and the time series prediction can be commented out.
    • Download EN4 analysis files for the years you want and store them in the EN4 analysis directory chosen in “create_directories”. For the reconstruction to be created from all available EN4 analysis files, the variable prediction_to_file needs to be True; otherwise only a single time slice will be predicted (but not saved) for testing and plotting.
    • If you want to generate reconstructions from pre-trained models, make sure the “scalers” and “weight_files” subdirectories are correctly stored in the “training” directory defined in “create_directories”.
    • Store the synthetic dataset of ECCO-Darwin values at GLODAP locations in the directory chosen in “create_directories”. For predicting the full model fields, ECCO-Darwin needs to be in a csv-style format (for use in pandas dataframes), i.e. the multi-dimensional data needs to be flattened. Store these altered csv-style files in the directory chosen in “create_directories”.

    Once a reconstruction has been generated the following scripts found in the subdirectory “working_with_finished_reconstructions” can be used:

    • ensemble_create_mean_and_std_2023-11-27.py: creates an ensemble mean from ideally 15 ensemble members (the number can be adjusted; if fewer reconstruction files are found, it is adjusted automatically). For DIC it also calculates the uncertainty following the method of Keppler et al. 2023.
    • plot_carbon_inventory_for_ensemble_2024-01-27.py: plots the carbon inventory change for DIC from both the ensemble mean and the individual ensemble members. The most important settings are the defaults. Other options include plotting the seasonal change; further options are not supported in this version as they require additional files not supplied here.
    • depth_slices_and_zonal_means_full_prediction_2024-07-05.py: creates several world maps for individual depths and zonal means for the Indian, Atlantic and Pacific Ocean.
    • Hovmoeller_plots_from_predictions_2024-05-02.py: generates simplified Hovmöller plots from individual reconstructions.
    • DIC_comparison_with_other_products_2024-06-27: interpolates and compares this product with climatologies and products from other studies. These need to be downloaded first. Products can be excluded if they are removed from the list “files_to_compare”.
  8. Brain Tumor CSV

    • kaggle.com
    zip
    Updated Oct 30, 2024
    Cite
    Akash Nath (2024). Brain Tumor CSV [Dataset]. https://www.kaggle.com/datasets/akashnath29/brain-tumor-csv/code
    Available download formats: zip (538175483 bytes)
    Dataset updated
    Oct 30, 2024
    Authors
    Akash Nath
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Description

    This dataset provides grayscale pixel values for brain tumor MRI images, stored in a CSV format for simplified access and ease of use. The goal is to create a "MNIST-like" dataset for brain tumors, where each row in the CSV file represents the pixel values of a single image in its original resolution. This format makes it convenient for researchers and developers to quickly load and analyze MRI data for brain tumor detection, classification, and segmentation tasks without needing to handle large image files directly.

    Motivation and Use Cases

    Brain tumor classification and segmentation are critical tasks in medical imaging, and datasets like these are valuable for developing and testing machine learning and deep learning models. While there are several publicly available brain tumor image datasets, they often consist of large image files that can be challenging to process. This CSV-based dataset addresses that by providing a compact and accessible format. Potential use cases include:
    • Tumor Classification: Identifying different types of brain tumors, such as glioma, meningioma, and pituitary tumors, or distinguishing between tumor and non-tumor images.
    • Tumor Segmentation: Applying pixel-level classification and segmentation techniques for tumor boundary detection.
    • Educational and Rapid Prototyping: Ideal for educational purposes or quick experimentation without requiring large image processing capabilities.

    Data Structure

    This dataset is structured as a single CSV file where each row represents an image, and each column represents a grayscale pixel value. The pixel values are stored as integers ranging from 0 (black) to 255 (white).

    CSV File Contents

    • Pixel Values: Each row contains the pixel values of a single grayscale image, flattened into a 1-dimensional array. The original image dimensions vary, and rows in the CSV will correspondingly vary in length.
    • Simplified Access: By using a CSV format, this dataset avoids the need for specialized image processing libraries and can be easily loaded into data analysis and machine learning frameworks like Pandas, Scikit-Learn, and TensorFlow.

    How to Use This Dataset

    1. Loading the Data: The CSV can be loaded using standard data analysis libraries, making it compatible with Python, R, and other platforms (see the sketch after this list).
    2. Data Preprocessing: Users may normalize pixel values (e.g., to between 0 and 1) for deep learning applications.
    3. Splitting Data: This dataset does not predefine training and testing splits, so users can separate rows into training, validation, and test sets as needed.
    4. Reshaping for Models: If needed, each row can be reshaped to its original dimensions (retrieved from the subfolder structure) to view or process as an image.
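    A minimal loading sketch for the variable-length rows; because image resolutions differ, the CSV is read line by line rather than as a rectangular table. The file name is hypothetical.

        # Sketch: load ragged pixel rows and normalize to [0, 1].
        import csv
        import numpy as np

        images = []
        with open("brain_tumor.csv") as f:   # hypothetical file name
            for row in csv.reader(f):
                pixels = np.array(row, dtype=np.float32)
                images.append(pixels / 255.0)        # 0-255 -> 0.0-1.0

        print(len(images), "images; first has", images[0].size, "pixels")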

    Technical Details

    • Image Format: Grayscale MRI images, with pixel values ranging from 0 to 255.
    • Resolution: Original resolution, no resizing applied.
    • Size: Each row’s length varies according to the original dimensions of each MRI image.
    • Data Type: CSV file with integer pixel values.

    Acknowledgments

    This dataset is intended for research and educational purposes only. Users are encouraged to cite and credit the original data sources if using this dataset in any publications or projects. This is a derived CSV version aimed to simplify access and usability for machine learning and data science applications.

  9. Emotion Prediction with Quantum5 Neural Network AI

    • kaggle.com
    zip
    Updated Oct 19, 2025
    Cite
    EMİRHAN BULUT (2025). Emotion Prediction with Quantum5 Neural Network AI [Dataset]. https://www.kaggle.com/datasets/emirhanai/emotion-prediction-with-semi-supervised-learning
    Available download formats: zip (2332683 bytes)
    Dataset updated
    Oct 19, 2025
    Authors
    EMİRHAN BULUT
    License

    CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Emotion Prediction with Quantum5 Neural Network AI Machine Learning - By Emirhan BULUT

    V1

    I have created artificial intelligence software that can predict emotion from text you have written, using the semi-supervised learning method and the RC algorithm. I used very simple code and focused on solving the problem. I aim to create a second version of the software using an RNN (recurrent neural network). I hope this serves as an example you can use in your theses and projects.

    V2

    I decided to apply a technique I had developed to the emotion dataset I had previously used with semi-supervised learning in machine learning methods. This technique is produced according to Quantum5 laws. I developed smart artificial intelligence software that can predict emotion with Quantum5 neural networks. I share this software with all humanity as open source on Kaggle. It is my first open-source NLP project with Quantum technology. Developing an NLP system with Quantum technology is very exciting!

    Happy learning!

    Emirhan BULUT

    Head of AI and AI Inventor

    Emirhan BULUT. (2022). Emotion Prediction with Quantum5 Neural Network AI [Data set]. Kaggle. https://doi.org/10.34740/KAGGLE/DS/2129637

    The coding language used:

    Python 3.9.8

    Libraries Used:

    Keras

    Tensorflow

    NumPy

    Pandas

    Scikit-learn (SKLEARN)

    Project images:
    https://raw.githubusercontent.com/emirhanai/Emotion-Prediction-with-Semi-Supervised-Learning-of-Machine-Learning-Software-with-RC-Algorithm---By/main/Quantum%205.png
    https://raw.githubusercontent.com/emirhanai/Emotion-Prediction-with-Semi-Supervised-Learning-of-Machine-Learning-Software-with-RC-Algorithm---By/main/Emotion%20Prediction%20with%20Semi%20Supervised%20Learning%20of%20Machine%20Learning%20Software%20with%20RC%20Algorithm%20-%20By%20Emirhan%20BULUT.png

    Developer Information:

    Name-Surname: Emirhan BULUT

    Contact (Email) : emirhan@isap.solutions

    LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/

    Kaggle: https://www.kaggle.com/emirhanai

    Official Website: https://www.emirhanbulut.com.tr

  10. Air Quality Index Prediction using Neural Networks

    • kaggle.com
    zip
    Updated Oct 27, 2025
    Cite
    Moiz Azad (2025). Air Quality Index Prediction using Neural Networks [Dataset]. https://www.kaggle.com/datasets/moizkhan00/air-quality-index-prediction-using-neural-networks
    Available download formats: zip (1290288 bytes)
    Dataset updated
    Oct 27, 2025
    Authors
    Moiz Azad
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    🌍 Air Quality Index (AQI) Prediction using Neural Networks

    This notebook focuses on predicting Air Quality Index (AQI) values by estimating Carbon Monoxide (CO) concentration using a Neural Network Regression Model trained on environmental pollutant data.

    The model follows the EPA (Environmental Protection Agency) standard formula for converting CO concentration (in ppm) to AQI levels.

    ⚙️ Workflow Overview

    1. Data Preprocessing

      • Cleaned and normalized the dataset
      • Removed date/time and irrelevant columns
      • Scaled input and output features using MinMaxScaler
    2. Model Building (Neural Network)

      • Built a deep regression model using TensorFlow/Keras
      • Activation: ReLU
      • Optimizer: Adam
      • Loss: Mean Squared Error (MSE)
    3. Prediction Phase

      • Model predicts CO concentration based on given input features
      • Predictions are inverse-transformed to get real-world ppm values
    4. AQI Calculation (EPA Standard)

      • AQI computed using the official EPA breakpoint formula (see the sketch after this list)
      • Converts CO ppm into an AQI score ranging from 0–500
    5. Visualization

      • Distribution of pollutants
      • Correlation heatmap
      • Comparison of Predicted CO vs AQI Levels
      • AQI Category visualization
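    The EPA conversion in step 4 is a piecewise linear interpolation between published breakpoints. A minimal sketch for 8-hour CO follows; the breakpoint values are quoted from the EPA AQI tables as best understood here, so verify them against the current official tables before use.

        # Sketch: AQI from 8-hour CO (ppm) using the EPA formula
        # AQI = (Ihi - Ilo) / (Chi - Clo) * (C - Clo) + Ilo
        CO_BREAKPOINTS = [          # (C_lo, C_hi, I_lo, I_hi)
            (0.0,   4.4,   0,  50),
            (4.5,   9.4,  51, 100),
            (9.5,  12.4, 101, 150),
            (12.5, 15.4, 151, 200),
            (15.5, 30.4, 201, 300),
            (30.5, 40.4, 301, 400),
            (40.5, 50.4, 401, 500),
        ]

        def co_to_aqi(ppm: float) -> int:
            c = int(ppm * 10) / 10.0        # EPA truncates CO to 0.1 ppm
            for c_lo, c_hi, i_lo, i_hi in CO_BREAKPOINTS:
                if c_lo <= c <= c_hi:
                    return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
            raise ValueError("CO concentration outside AQI breakpoint range")

        print(co_to_aqi(7.2))               # -> 78, in the "Moderate" band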

    🧠 Why This Project?

    Air pollution is one of the most pressing global issues today.
    By combining machine learning with environmental science, this notebook helps predict pollution levels and interpret air quality using AI-driven insights.

    📊 Tech Stack

    • Python
    • TensorFlow / Keras
    • NumPy, Pandas, Matplotlib, Seaborn
    • Scikit-learn

    🏁 Results

    ✅ Accurate CO prediction using neural network regression
    ✅ Dynamic AQI computation based on EPA standards
    ✅ Clear and intuitive visualizations

    🚀 "AI can’t clean the air — but it can help us understand how bad it really is."

  11. GitHub Commit Messages Dataset

    • kaggle.com
    zip
    Updated Mar 2, 2021
    Cite
    Dhruvil Dave (2021). GitHub Commit Messages Dataset [Dataset]. https://www.kaggle.com/dsv/1988456
    Available download formats: zip (561489165 bytes)
    Dataset updated
    Mar 2, 2021
    Authors
    Dhruvil Dave
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description


    Introduction

    This is a dataset that contains all commit messages and their related metadata from 32 popular GitHub repositories. These repositories are:

    • tensorflow/tensorflow
    • pytorch/pytorch
    • torvalds/linux
    • python/cpython
    • rust-lang/rust
    • microsoft/TypeScript
    • microsoft/vscode
    • golang/go
    • numpy/numpy
    • scikit-learn/scikit-learn
    • openbsd/src
    • freebsd/freebsd-src
    • pandas-dev/pandas
    • scipy/scipy
    • tidyverse/ggplot2
    • kubernetes/kubernetes
    • postgres/postgres
    • nodejs/node
    • facebook/react
    • angular/angular
    • matplotlib/matplotlib
    • apache/httpd
    • nginx/nginx
    • opencv/opencv
    • ipython/ipython
    • rstudio/rstudio
    • jupyterlab/jupyterlab
    • gcc-mirror/gcc
    • apple/swift
    • denoland/deno
    • apache/spark
    • llvm/llvm-project


