MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
Keras scripts for creating a simplified denoising diffusion model, along with a trained model ready for reuse. Diffusion models are very popular in generative AI, and since I wanted to try out some ideas and understand exactly what was going on, I decided to create a few scripts for experimentation. I thought sharing the files might be helpful to some (including myself), so here they are. Feel free to use them in any way you want!
The files are:
diffusion_model_small_output.keras - The model trained on a car dataset
**train_autobot.py** - Script for training the model on a dataset. In my tests I needed to train on at least 1 million batches (each batch consisting of 10 versions of the same input image at various noise intensities) to get decent results.
test_autobot.py - Script for testing the Keras model. Be aware that it takes some time to load the large model!
model_output_test.jpg - A test output (the result from a run of the test script)
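The batch construction described above (10 copies of one image at different noise intensities) can be sketched as follows. This is an illustrative NumPy sketch, not the code from train_autobot.py; the function name and the square-root blending scheme are assumptions:

```python
import numpy as np

def make_noisy_batch(image, batch_size=10, rng=None):
    """Build a batch of `batch_size` copies of one image, each corrupted
    with a different noise intensity (illustrative sketch only)."""
    rng = rng or np.random.default_rng(0)
    # Noise intensities spread evenly over (0, 1]
    intensities = np.linspace(1.0 / batch_size, 1.0, batch_size)
    noise = rng.normal(size=(batch_size,) + image.shape)
    # Blend clean image with noise; a common diffusion-style mixing rule
    alphas = 1.0 - intensities
    noisy = (np.sqrt(alphas)[:, None, None, None] * image
             + np.sqrt(intensities)[:, None, None, None] * noise)
    return noisy.astype(np.float32), intensities

image = np.zeros((64, 64, 3))  # dummy stand-in for a dataset image
batch, levels = make_noisy_batch(image)
print(batch.shape)  # (10, 64, 64, 3)
```

The model is then trained to predict the clean image (or the noise) from each corrupted copy, with the intensity acting as the diffusion timestep.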
I have experimented with various model architectures but have obviously not exhausted all possibilities. The architecture is probably overly large for simple datasets and could likely be slimmed down. There appears to be some mode collapse, so further experimentation with model architectures, input augmentation, and especially loss functions would probably be beneficial.
Looking forward to your feedback!
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The dataset contains images categorized into sehat ("healthy") and tidak sehat ("unhealthy"), organized into train, test, and validation folders, each with subfolders for each class (/sehat and /tidak sehat). Images are in JPEG or PNG format with a recommended resolution of 240x240 pixels, suitable for the VGG16 model's input requirements. The dataset is intended for deep learning applications, viewable with standard image viewers, and usable with Python, particularly TensorFlow and Keras. To access and run the VGG16 model, Google Colab (cloud) or Jupyter Notebook can be used. For processing, an image data generator is set up to normalize the images, while VGG16 (with pre-trained ImageNet weights) serves as the base model, with added dense layers for binary classification between sehat and tidak sehat. The model can then be compiled with an optimizer (e.g., Adam) and trained on the data, with evaluation on the validation and test sets.
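The setup described above can be sketched in Keras as follows. This is a minimal sketch, not the dataset authors' code: the layer sizes and folder path are assumptions, and `weights=None` is used here only to keep the sketch download-free (the description calls for `weights="imagenet"`):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = (240, 240)  # recommended resolution from the dataset description

# VGG16 base; the description uses pre-trained ImageNet weights
# (weights="imagenet") -- None here only to avoid a weight download.
base = VGG16(weights=None, include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the convolutional base for transfer learning

# Added dense layers for binary classification (sehat vs. tidak sehat)
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),  # illustrative layer size
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Normalizing image data generator over the train/ folder (path illustrative):
# datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
# train_gen = datagen.flow_from_directory("train", target_size=IMG_SIZE,
#                                         class_mode="binary")

print(model.output_shape)  # (None, 1)
```

Training then proceeds with `model.fit(train_gen, validation_data=val_gen)` and a final `model.evaluate` on the test generator.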
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
A spatio-temporal (ST) machine learning (ML) model for accelerating security-constrained unit commitment (SCUC) solutions. The ML architecture combines GNN and LSTM layers. It includes two models: one for node prediction, which predicts generator commitment status, and one for edge prediction, which predicts congested lines in the system. The predictions from the two models are then used to reduce the number of variables and constraints in the SCUC problem. NOTE: Code is implemented in Python. The ML model uses the Keras, TensorFlow, and Spektral (GNN) libraries. Optimization is implemented using Pyomo; a solver license (CPLEX/Gurobi) is required for Pyomo to run.
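The reduction step described above can be sketched in plain Python. This is an illustrative sketch, not the released code: the function name, thresholds, and data layout are assumptions. Generators whose predicted commitment probability is confidently high or low have their binary variables fixed, and line-flow constraints are kept only for lines predicted to be congested:

```python
def reduce_scuc(commitment_pred, congestion_pred, threshold=0.9):
    """Shrink a SCUC problem using node- and edge-model predictions
    (illustrative sketch; names and thresholds are assumptions)."""
    # Fix binaries only where the node model is confident either way
    fixed_units = {g: round(p) for g, p in commitment_pred.items()
                   if p >= threshold or p <= 1 - threshold}
    # Keep flow constraints only for lines predicted congested
    active_line_constraints = [l for l, p in congestion_pred.items()
                               if p >= 0.5]
    return fixed_units, active_line_constraints

# Node model output: P(generator committed); edge model: P(line congested)
units = {"g1": 0.98, "g2": 0.05, "g3": 0.60}
lines = {"l1": 0.80, "l2": 0.10}
fixed, active = reduce_scuc(units, lines)
print(fixed)   # {'g1': 1, 'g2': 0} -- g3 stays a free binary variable
print(active)  # ['l1']
```

In the full pipeline, `fixed` would be applied by fixing the corresponding Pyomo binary variables before handing the reduced problem to the CPLEX/Gurobi solver.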