License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Data and code for reproducing figures in published work.
High Power Laser Science and Engineering
https://doi.org/10.1017/hpl.2022.47
The code uses various Python packages, including TensorFlow. The conda environment was created with (on 6 Jan 2022):
conda create --name tf tensorflow notebook tensorflow-probability pandas tqdm scikit-learn matplotlib seaborn protobuf opencv scipy scikit-image scikit-optimize Pillow PyAbel libclang flatbuffers gast --channel conda-forge
# Computational analyses of the courtship dance of male wolf spiders

It has long been a challenge to quantify the variation in dynamic motions to understand how such displays function in animal communication. The traditional approach depends on labor-intensive manual identification and annotation by experts. However, recent progress in computational techniques provides researchers with toolsets for rapid, objective, and reproducible quantification of dynamic visual displays. In the present study, we investigated the effects of diet manipulation on the dynamic visual components of male courtship displays of Rabidosa rabida wolf spiders using machine learning algorithms. Our results suggest that (i) the computational approach can provide insight into the variation in the dynamic visual display between high- and low-diet males that is not clearly shown with the traditional approach, and (ii) males may plastically alter their courtship display according to the body size of the females they encounter. Through the present study, we add an example of the utili...

Raw data: We recorded male courtship with a Photron Fastcam 1024 PCI 100k high-speed camera (Photron USA, San Diego, CA, USA) and a Sony DCR-HC65 NTSC Handycam (Sony Electronics Inc., USA). We then analyzed the movement of the foreleg and pedipalps during the selected courtship bouts using ProAnalyst Lite software (Xcitex Inc., Woburn, Massachusetts, USA). We first set the x-axis and y-axis by where the pedipalp tip was in contact with the substrate (y-position 0) and the most posterior point of the abdomen (x-position 0) at the beginning of the courtship bout. When the foreleg or pedipalps did not move during the courtship bout, the location of the joint was recorded as the location of the parts at the cocked position. When an image was blurred, the location of the blurred points was estimated from the previous or subsequent frames, or from other parts in the current frame.
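The coordinate convention described above (y = 0 where the pedipalp tip touches the substrate, x = 0 at the most posterior point of the abdomen) amounts to a simple translation of the digitized points. A minimal sketch of that step, with hypothetical function and variable names (this is not the authors' code):

```python
import numpy as np

def to_bout_frame(points, pedipalp_tip, abdomen_posterior):
    """Translate digitized (x, y) points into the bout coordinate frame:
    y = 0 where the pedipalp tip touches the substrate,
    x = 0 at the most posterior point of the abdomen."""
    points = np.asarray(points, dtype=float)
    origin = np.array([abdomen_posterior[0], pedipalp_tip[1]])
    return points - origin

# Two hypothetical digitized joint positions in raw pixel coordinates
shifted = to_bout_frame([(12.0, 5.0), (15.0, 9.0)],
                        pedipalp_tip=(10.0, 3.0),
                        abdomen_posterior=(2.0, 3.0))
print(shifted)  # [[10.  2.] [13.  6.]]
```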
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
PROGRAM SUMMARY
No. of lines in distributed program, including test data, etc.: 481
No. of bytes in distributed program, including test data, etc.: 14540.8
Distribution format: .py, .csv
Programming language: Python
Computer: any workstation or laptop computer running TensorFlow, Google Colab, Anaconda, Jupyter, pandas, NumPy, Microsoft Azure and Alteryx
Operating system: Windows, macOS, Linux
Nature of problem: The Navier-Stokes equations are solved numerically in ANSYS Fluent using a Reynolds stress model for turbulence. The simulated values of the friction factor are validated against theoretical and experimental data from the literature. Artificial neural networks are then used for a prediction-based augmentation of the friction factor. The capabilities of the neural networks are discussed with regard to computational cost and domain limitations.
Solution method: The simulation data is obtained through Reynolds stress modelling of fluid flow through a pipe. This data is augmented using an artificial neural network model that predicts both within and outside the data domain.
Restrictions: The code used in this research is limited to smooth pipe bends, in which friction factor is analysed using a steady state incompressible fluid flow.
Runtime: The artificial neural network produces results within a span of 20 seconds for three-dimensional geometry, using the allocated free computational resources of Google Colaboratory cloud-based computing system.
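As a hedged illustration of the prediction-based augmentation idea (not the authors' ANSYS/TensorFlow pipeline), the sketch below trains a small neural network on synthetic friction-factor data generated from the smooth-pipe Blasius correlation, then queries it at Reynolds numbers not in the training set; all names and data here are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the RSM simulation output: Reynolds number
# versus friction factor from the Blasius smooth-pipe correlation.
rng = np.random.default_rng(0)
Re = rng.uniform(1e4, 1e5, size=200)
f = 0.316 * Re ** -0.25

# Train a small ANN on log-scaled Reynolds numbers
X = np.log10(Re).reshape(-1, 1)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0)
model.fit(X, f)

# Augmentation step: predict friction factor at unseen Reynolds numbers
f_pred = model.predict(np.log10([[2e4], [8e4]]))
print(f_pred)
```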
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
PyLibAPIs.7z: contains public API data (MongoDB dump) for these frameworks:
TensorFlow
Keras
scikit-learn
Pandas
Flask
Django
Label.xlsx: contains issues and their labels
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
Data product and code for: Ehmen et al.: Spatiotemporal Distribution of Dissolved Inorganic Carbon in the Global Ocean Interior - Reconstructed through Machine Learning
Note that due to the data limit on Zenodo only a compressed version of the ensemble mean is uploaded here (compressed_DIC_mean_15fold_ensemble_aveRMSE7.46_0.15TTcasts_1990-2023.nc). Individual ensemble members can be generated through the weight and scaler files found in weights_and_scalers_DIC_paper.zip and the code "ResNet_DIC_loading_past_prediction_2024-12-28.py" (see description below).
Prerequisites: Python running the modules tensorflow, shap, xarray, pandas and scipy. Plots additionally use matplotlib, cartopy, seaborn, statsmodels, gsw and cmocean.
The main scripts used to generate reconstructions are “ResNet_DIC_2024-12-28.py” (for new training runs) and “ResNet_DIC_loading_past_prediction_2024-12-28.py” (for already trained past weight and scaler files). Usage:
Once a reconstruction has been generated the following scripts found in the subdirectory “working_with_finished_reconstructions” can be used:
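Since only the compressed ensemble mean is uploaded, individual members regenerated from the weight and scaler files would be averaged to recover it. A minimal numpy sketch of that averaging step, using invented stand-in arrays rather than real reconstructions:

```python
import numpy as np

# Hypothetical: 15-fold ensemble members of reconstructed DIC
# (depth x lat x lon); tiny random stand-ins replace the real output
# of the training/prediction scripts.
rng = np.random.default_rng(42)
members = [2200 + 10 * rng.standard_normal((3, 4, 5)) for _ in range(15)]

# The uploaded NetCDF file stores only this mean over ensemble members
dic_mean = np.mean(np.stack(members), axis=0)
print(dic_mean.shape)  # (3, 4, 5)
```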
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
This dataset contains the code to reproduce the five different segmentation results of the paper Lee et al. (2021). The original dataset, before applying these segmentation codes, can be found in Ruf & Steeb (2020). The segmentation methods adopted to identify the micro-fractures within the original dataset are local threshold, Sato, Chan-Vese, random forest, and a U-Net model. The local threshold, Sato, and U-Net models are written in Python; the code requires Python 3.7.7 or above with the tensorflow, keras, pandas, scipy, scikit and numpy libraries. The workflow of the Chan-Vese method is implemented in MATLAB R2018b. The result of the random forest method can be reproduced with the uploaded trained model in the open-source program ImageJ with the Trainable Weka library. For further details of operation, please refer to the readme.txt file.
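Of the five methods, local thresholding is the simplest to sketch. The following is a minimal, hedged illustration (not the paper's actual code) of picking dark micro-fractures out of a brighter matrix by thresholding against the neighborhood mean; all names and parameter values are invented:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold(image, size=15, offset=0.0):
    """Minimal local (adaptive) thresholding: a pixel is foreground when
    it is darker than its neighborhood mean minus an offset, which lets
    thin dark fractures stand out against a brighter matrix."""
    local_mean = uniform_filter(image.astype(float), size=size)
    return image < (local_mean - offset)

# Synthetic stand-in: bright matrix with one dark "fracture" line
img = np.full((50, 50), 200.0)
img[25, 5:45] = 50.0
mask = local_threshold(img, size=15, offset=10.0)
print(mask[25, 20], mask[10, 10])  # True False
```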
License: MIT License (https://opensource.org/licenses/MIT)
| Label | Species Name | Image Count |
|---|---|---|
| 1 | American Goldfinch | 143 |
| 2 | Emperor Penguin | 139 |
| 3 | Downy Woodpecker | 137 |
| 4 | Flamingo | 132 |
| 5 | Carmine Bee-eater | 131 |
| 6 | Barn Owl | 129 |
📂 Dataset Highlights: * Total Images: 811 * Classes: 6 unique bird species * Balanced Labels: Nearly equal distribution across classes * Use Cases: Image classification, model benchmarking, transfer learning, educational projects, biodiversity analysis
🧠 Potential Applications: * Training deep learning models like CNNs for bird species recognition * Fine-tuning pre-trained models using a small and balanced dataset * Educational projects in ornithology and computer vision * Biodiversity and wildlife conservation tech solutions
🛠️ Suggested Tools: * Python (Pandas, NumPy, Matplotlib) * TensorFlow / PyTorch for model development * OpenCV for image preprocessing * Streamlit for creating interactive demos
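Because the labels are nearly balanced, a stratified split preserves that balance for training and evaluation. A small sketch with an invented metadata frame standing in for the real image files:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical metadata frame mirroring the table above (paths invented)
df = pd.DataFrame({
    "filepath": [f"birds/{i}.jpg" for i in range(60)],
    "label": [i % 6 + 1 for i in range(60)],  # 6 balanced classes
})

# Stratified split keeps the near-equal class distribution intact
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42)
print(train_df["label"].value_counts().tolist())  # [8, 8, 8, 8, 8, 8]
```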
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
This dataset contains the code to reproduce the results of "Time resolved micro-XRCT dataset of Enzymatically Induced Calcite Precipitation (EICP) in sintered glass bead columns", cf. https://doi.org/10.18419/darus-2227. The code takes "low-dose" images as input; these contain many artifacts and noise as a trade-off for fast data acquisition (6 min per dataset, versus 3 hours per dataset in the normal "high-dose" configuration). These low-quality images can be improved with the help of a pre-trained model. The pre-trained model provided here was trained on pairs of "high-dose" and "low-dose" data from the EICP application mentioned above. Examples of the training, input and output data used can also be found in this dataset. Although we show only limited examples here, we would like to emphasize that the workflow and code can be extended to general image-enhancement applications. The code requires Python 3.7.7 or above with the tensorflow, keras, pandas, scipy, scikit, numpy and patchify libraries. For further details of operation, please refer to the readme.txt file.
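A common way to feed large tomography slices to such an enhancement model is to tile them into patches (the role of the patchify library listed above) and stitch the outputs back together. A minimal numpy sketch of the tiling step only, with invented sizes (not the dataset's actual code):

```python
import numpy as np

def extract_patches(image, patch=64, step=64):
    """Cut a 2-D image into non-overlapping patches, the usual
    preprocessing before feeding tiles to a denoising model."""
    h, w = image.shape
    return np.array([
        image[i:i + patch, j:j + patch]
        for i in range(0, h - patch + 1, step)
        for j in range(0, w - patch + 1, step)
    ])

img = np.zeros((256, 256))
patches = extract_patches(img)
print(patches.shape)  # (16, 64, 64)
```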
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
This repository contains three folders holding either the data or the source code for the three main chapters (Chapters 3, 4, and 5) of the thesis:
1) Dataset (Chapter 3): /PhysioNet2016 contains the phonocardiogram signals used in Chapters 3 and 4 as the upstream pretraining data; this is a public dataset. /SourceCode includes all the statistical analysis and visualization scripts for Chapter 3. Yaseen_dataset and PASCAL contain phonocardiogram signals with pathological features: Yaseen_dataset serves as the downstream fine-tuning dataset in Chapter 3, while PASCAL serves as the secondary testing dataset in Chapter 3.
2) Dataset (Chapter 4): /SourceCode includes all the statistical analysis and visualization scripts for Chapter 4.
3) Dataset (Chapter 5): PAD-UFES-20_processed contains dermatology images processed from the PAD-UFES-20 dataset, which is a public dataset; it is used in Chapter 5. /SourceCode includes all the statistical analysis and visualization scripts for Chapter 5.
Several packages are mandatory to run the source code: Python > 3.6 (3.11 preferred), TensorFlow > 2.16, Keras > 3.3, NumPy > 1.26, Pandas > 2.2, SciPy > 1.13.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
DeepB3Pred uses the following dependencies: MATLAB 2018a, Python 3.10, numpy, scipy, pandas, scikit-learn, catboost 1.1.1, gc_forset, xgboost 1.5.0, tensorflow 1.15.0, Keras 2.1.6.
Guiding principles: the data contains a training dataset and a testing dataset. The training dataset TR_BB.fasta includes BBB_pos and BBB_neg training samples; the testing dataset TS_BB includes BBB_pos and BBB_neg testing samples.
Feature_Extraction: CPSR is the implementation of component protein sequence representation, MCTD is the implementation of composition-transition-distribution, and GSFE is the implementation of graphical and statistical-based feature engineering.
Classifier: DeepB3Pred.py is the implementation of the proposed method to predict B3PPs and non-B3PPs.
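As a hedged, simplified illustration of sequence-based feature extraction in this spirit (not the actual CPSR/MCTD/GSFE code), the sketch below computes plain amino-acid composition features for a peptide; the function name is invented:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(seq):
    """Amino-acid composition: the fraction of each of the 20 standard
    residues in a sequence. A toy stand-in for the richer sequence
    representations used by DeepB3Pred."""
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(a, 0) / n for a in AMINO_ACIDS]

feats = composition_features("ACDCA")
print(len(feats), feats[0])  # 20 0.4
```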
License: CC0 1.0 Public Domain (https://creativecommons.org/publicdomain/zero/1.0/)
V1
I have created artificial intelligence software that predicts emotion from the text you write, using the semi-supervised learning method and the RC algorithm. I used very simple code and focused on solving the problem. I aim to create the second version of the software using an RNN (recurrent neural network). I hope this serves as an example you can use in your theses and projects.
V2
I decided to apply a technique I had developed to the emotion dataset I had previously used with semi-supervised learning. This technique is produced according to Quantum5 laws. I developed artificial intelligence software that can predict emotion with Quantum5 neural networks. I share this software with everyone as open source on Kaggle. It is my first open-source NLP project with Quantum technology. Developing an NLP system with Quantum technology is very exciting!
Happy learning!
Emirhan BULUT
Head of AI and AI Inventor
Emirhan BULUT. (2022). Emotion Prediction with Quantum5 Neural Network AI [Data set]. Kaggle. https://doi.org/10.34740/KAGGLE/DS/2129637
Python 3.9.8
Keras
Tensorflow
NumPy
Pandas
Scikit-learn (SKLEARN)
Image: https://raw.githubusercontent.com/emirhanai/Emotion-Prediction-with-Semi-Supervised-Learning-of-Machine-Learning-Software-with-RC-Algorithm---By/main/Quantum%205.png (Emotion Prediction with Quantum5 Neural Network on AI - Emirhan BULUT)
Image: https://raw.githubusercontent.com/emirhanai/Emotion-Prediction-with-Semi-Supervised-Learning-of-Machine-Learning-Software-with-RC-Algorithm---By/main/Emotion%20Prediction%20with%20Semi%20Supervised%20Learning%20of%20Machine%20Learning%20Software%20with%20RC%20Algorithm%20-%20By%20Emirhan%20BULUT.png (Emotion Prediction with Semi Supervised Learning of Machine Learning Software with RC Algorithm - Emirhan BULUT)
Name-Surname: Emirhan BULUT
Contact (Email) : emirhan@isap.solutions
LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/
Kaggle: https://www.kaggle.com/emirhanai
Official Website: https://www.emirhanbulut.com.tr
License: MIT License (https://opensource.org/licenses/MIT)
This project aims to develop a model for identifying five different flower species (rose, tulip, sunflower, dandelion, daisy) using Convolutional Neural Networks (CNNs).
The dataset consists of 5,000 images (1,000 images per class) collected from various online sources. The model achieved an accuracy of 98.58% on the test set. Usage
* TensorFlow: for building neural networks
* numpy: for numerical computing and array operations
* pandas: for data manipulation and analysis
* matplotlib: for creating visualizations such as line plots, bar plots, and histograms
* seaborn: for advanced data visualization and statistically-informed graphics
* scikit-learn: for machine learning algorithms and model training
To run the project:
Install the required libraries, then run the Jupyter notebook: jupyter notebook flower_classification.ipynb
Additional information: Link to code: https://github.com/Harshjaglan01/flower-classification-cnn License: MIT License
Privacy policy: https://www.wiseguyreports.com/pages/privacy-policy
| BASE YEAR | 2024 |
| HISTORICAL DATA | 2019 - 2023 |
| REGIONS COVERED | North America, Europe, APAC, South America, MEA |
| REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| MARKET SIZE 2024 | 4.65 (USD Billion) |
| MARKET SIZE 2025 | 5.51 (USD Billion) |
| MARKET SIZE 2035 | 30.0 (USD Billion) |
| SEGMENTS COVERED | Application, Deployment Mode, End User, Framework Type, Regional |
| COUNTRIES COVERED | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| KEY MARKET DYNAMICS | rising demand for automation, increasing big data utilization, growth in AI adoption, need for real-time analytics, surge in cloud-based services |
| MARKET FORECAST UNITS | USD Billion |
| KEY COMPANIES PROFILED | NVIDIA, Apache, Microsoft, H2O.ai, Google, Alteryx, Oracle, C3.ai, Pandas, DataRobot, Facebook, Amazon, RapidMiner, Keras, TensorFlow, IBM |
| MARKET FORECAST PERIOD | 2025 - 2035 |
| KEY MARKET OPPORTUNITIES | Increased demand for automation, Growth in AI-based applications, Expansion in edge computing, Rising need for real-time data processing, Surge in personalized customer experiences |
| COMPOUND ANNUAL GROWTH RATE (CAGR) | 18.4% (2025 - 2035) |
License: Attribution-ShareAlike 3.0 (CC BY-SA 3.0) (https://creativecommons.org/licenses/by-sa/3.0/)
Imports:
# All Imports
import os
from matplotlib import pyplot as plt
import pandas as pd
from sklearn.calibration import LabelEncoder
import seaborn as sns
import matplotlib.image as mpimg
import cv2
import numpy as np
import pickle
# TensorFlow and Keras: layers, models, optimizers and losses
import tensorflow as tf
from tensorflow import keras
from keras import Sequential
from keras.layers import *
# Optimizer
from keras.optimizers import Adamax
# PreTrained Model
from keras.applications import *
#Early Stopping
from keras.callbacks import EarlyStopping
import warnings
Warnings Suppression | Configuration
# Warnings Remove
warnings.filterwarnings("ignore")
# Define the base path for the training folder
base_path = 'jaguar_cheetah/train'
# Weights file
weights_file = 'Model_train_weights.weights.h5'
# Path to the saved or to save the model:
model_file = 'Model-cheetah_jaguar_Treined.keras'
# Model history
history_path = 'training_history_cheetah_jaguar.pkl'
# Initialize lists to store file paths and labels
filepaths = []
labels = []
# Iterate over folders and files within the training directory
for folder in ['Cheetah', 'Jaguar']:
    folder_path = os.path.join(base_path, folder)
    for filename in os.listdir(folder_path):
        file_path = os.path.join(folder_path, filename)
        filepaths.append(file_path)
        labels.append(folder)
# Create the TRAINING dataframe
file_path_series = pd.Series(filepaths, name='filepath')
label_path_series = pd.Series(labels, name='label')
df_train = pd.concat([file_path_series, label_path_series], axis=1)
# Define the base path for the test folder
directory = "jaguar_cheetah/test"
filepath =[]
label = []
folds = os.listdir(directory)
for fold in folds:
    f_path = os.path.join(directory, fold)
    imgs = os.listdir(f_path)
    for img in imgs:
        img_path = os.path.join(f_path, img)
        filepath.append(img_path)
        label.append(fold)
# Create the TEST dataframe
file_path_series = pd.Series(filepath, name='filepath')
label_path_series = pd.Series(label, name='label')
df_test = pd.concat([file_path_series, label_path_series], axis=1)
# Display the first rows of the dataframe for verification
#print(df_train)
# Folders with Training and Test files
data_dir = 'jaguar_cheetah/train'
test_dir = 'jaguar_cheetah/test'
# Image size 256x256
IMAGE_SIZE = (256,256)
Train | Test
#print('Training Images:')
# Create the TRAIN dataframe
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset='training',
    seed=123,
    image_size=IMAGE_SIZE,
    batch_size=32)
#Testing Data
#print('Validation Images:')
validation_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset='validation',
    seed=123,
    image_size=IMAGE_SIZE,
    batch_size=32)
print('Testing Images:')
test_ds = tf.keras.utils.image_dataset_from_directory(
    test_dir,
    seed=123,
    image_size=IMAGE_SIZE,
    batch_size=32)
# Extract labels
train_labels = train_ds.class_names
test_labels = test_ds.class_names
validation_labels = validation_ds.class_names
# Encode labels
# Defining the class labels
class_labels = ['CHEETAH', 'JAGUAR']
# Instantiate (encoder) LabelEncoder
label_encoder = LabelEncoder()
# Fit the label encoder on the class labels
label_encoder.fit(class_labels)
# Transform the labels for the training dataset
train_labels_encoded = label_encoder.transform(train_labels)
# Transform the labels for the validation dataset
validation_labels_encoded = label_encoder.transform(validation_labels)
# Transform the labels for the testing dataset
test_labels_encoded = label_encoder.transform(test_labels)
# Normalize the pixel values
# Train files
train_ds = train_ds.map(lambda x, y: (x / 255.0, y))
# Validate files
validation_ds = validation_ds.map(lambda x, y: (x / 255.0, y))
# Test files
test_ds = test_ds.map(lambda x, y: (x / 255.0, y))
#TRAINING VISUALIZATION
#Count the occurrences of each category in the column
count = df_train['label'].value_counts()
# Create a figure with 2 subplots
fig, axs = plt.subplots(1, 2, figsize=(12, 6), facecolor='white')
# Plot a pie chart on the first subplot
palette = sns.color_palette("viridis")
sns.set_palette(palette)
axs[0].pie(count, labels=count.index, autopct='%1.1f%%', startangle=140)
axs[0].set_title('Distribution of Training Categories')
# Plot a bar chart on the second subplot
sns.barplot(x=count.index, y=count.values, ax=axs[1], palette="viridis")
axs[1].set_title('Count of Training Categories')
# Adjust the layout
plt.tight_layout()
# Visualize
plt.show()
# TEST VISUALIZATION
count = df_test['label'].value_counts()
# Create a figure with 2 subplots
fig, axs = plt.subplots(1, 2, figsize=(12, 6), facec...
License: GNU AGPL 3.0 (http://www.gnu.org/licenses/agpl-3.0.html)
First version. Cryptocurrency Prediction with Artificial Intelligence (deep learning via LSTM neural networks), by Emirhan BULUT. I developed cryptocurrency prediction software with artificial intelligence, using deep learning with LSTM neural networks. I predicted the fall on December 28, 2021 in the XRP/USDT pair with 98.5% accuracy. The completed software achieves an MAE score of 0.009179626158151918, an MSE score of 0.0002120391943355104, and 98.35% accuracy.
The XRP/USDT pair forecast for December 28, 2021 was correctly forecasted based on data from Binance.
Software codes and information are shared with you as open source code free of charge on GitHub and My Personal Web Address.
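LSTM forecasters of this kind are typically trained on sliding windows of past prices. A minimal sketch of that windowing step, with an invented function name and toy prices rather than the project's actual Binance data:

```python
import numpy as np

def make_windows(series, lookback=3):
    """Turn a price series into (X, y) pairs for an LSTM-style
    forecaster: each sample is `lookback` past values, and the
    target is the value that follows."""
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = np.array(series[lookback:])
    return X, y

prices = [1.0, 1.1, 1.2, 1.15, 1.05, 0.95]
X, y = make_windows(prices, lookback=3)
print(X.shape, y.tolist())  # (3, 3) [1.15, 1.05, 0.95]
```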
Happy learning!
Emirhan BULUT
Senior Artificial Intelligence Engineer & Inventor
Python 3.9.8
Tensorflow - Keras
NumPy
Matplotlib
Pandas
Scikit-learn - (SKLEARN)
Image: https://raw.githubusercontent.com/emirhanai/Cryptocurrency-Prediction-with-Artificial-Intelligence/main/XRP-1%20-%20PREDICTION.png (Cryptocurrency Prediction with Artificial Intelligence (Deep Learning via LSTM Neural Networks) - Emirhan BULUT)
Name-Surname: Emirhan BULUT
Contact (Email) : emirhan@isap.solutions
LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/
Kaggle: https://www.kaggle.com/emirhanai
Official Website: https://www.emirhanbulut.com.tr
License: GNU AGPL 3.0 (http://www.gnu.org/licenses/agpl-3.0.html)
I developed artificial intelligence software that predicts your age and gender, with a 93% accuracy rate. I am 21 years old, and it predicted my age 100% correctly! I tuned the algorithm and prepared the code: a deep learning system built on neural networks, using convolutional layers from convolutional neural networks. I am pleased to present this software for humanity. Doctoral students can use it in their theses, and companies can use this software! Upload your photo, and it will guess your age and gender!
Kind regards,
Emirhan BULUT
Head of AI & AI Inventor
Python 3.9.8
TensorFlow
Keras
OpenCV
Matplotlib
NumPy
Pandas
Scikit-learn - (SKLEARN)
Image: https://raw.githubusercontent.com/emirhanai/Age-and-Sex-Prediction-from-Image---Convolutional-Neural-Network-with-Artificial-Intelligence/main/Age%20and%20Sex%20Prediction%20from%20Image%20-%20Convolutional%20Neural%20Network%20with%20Artificial%20Intelligence.png (Age and Sex Prediction from Image - Convolutional Neural Network with Artificial Intelligence)
Name-Surname: Emirhan BULUT
Contact (Email) : emirhan@isap.solutions
LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/
Kaggle: https://www.kaggle.com/emirhanai
Official Website: https://www.emirhanbulut.com.tr