Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Dataset of models and their metadata obtained from CivitAI
This dataset is licensed under CC BY-NC 4.0, which allows for non-commercial use with proper attribution.
Column Preview
Model Data Preview (Version ID columns summarized)
| Column Name | Description | Example Value |
| --- | --- | --- |
| id | Unique identifier for the model on CivitAI | 4201 |
| name | Name of the model | Realistic Vision V6.0 B1 |
| type | Type of model (e.g., Checkpoint, LoRA, etc.) | Checkpoint |
| baseModel | Base… | |

See the full description on the dataset page: https://huggingface.co/datasets/pm-paper-datasets/Civ-Models.
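For quick inspection, the metadata can be loaded with the `datasets` library. This is a minimal sketch that assumes the default configuration and a `train` split; check the dataset page for the actual split names.

```python
# Minimal sketch: loading the CivitAI models metadata from the Hugging Face Hub.
# Assumes the default configuration and a "train" split; adjust to match the dataset page.
from datasets import load_dataset

civ_models = load_dataset("pm-paper-datasets/Civ-Models", split="train")

# Inspect the columns described above (id, name, type, baseModel, ...).
print(civ_models.column_names)
print(civ_models[0])
```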
The goal of the neuromuscular models library is to provide a resource for students, researchers, and clinicians to access, use, test, and develop models. The majority of models in this library are for use with OpenSim and/or SIMM. Users who contribute models to the database can set up a project page where they can track who is using the model and contact them.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
IHA Models is a dataset for object detection tasks - it contains Iha annotations for 481 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
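A minimal download sketch using the `roboflow` Python package is shown below. The API key, workspace, project slug, version number, and export format are placeholders; substitute the values shown on the dataset's Roboflow page.

```python
# Minimal sketch of downloading a Roboflow dataset with the `roboflow` package.
# All identifiers below are placeholders for illustration only.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("iha-models")
dataset = project.version(1).download("coco")  # export format, e.g. "coco" or "yolov8"

print(dataset.location)  # local path of the downloaded images and annotations
```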
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The data describe freeway car-following behavior (such as velocity, acceleration, and relative position) for the car-following instances observed during 6 data collection runs, collected using an Instrumented Research Vehicle (IRV) along freeways and arterials in western Massachusetts in the summer of 2016 to better understand work zone driver behaviors. The USDOT Volpe National Transportation Systems Center (Volpe Center) identified, isolated, and classified individual car following instances from within the raw datasets (classification parameters included roadway type, level of congestion, and speed limit), then processed, refined, and cleaned the dataset. This table contains metadata about each data collection run. See also the instances table (https://datahub.transportation.gov/Automobiles/Enhancing-Microsimulation-Models-for-Improved-Work/74ug-57tr) and radar table (https://datahub.transportation.gov/Automobiles/Enhancing-Microsimulation-Models-for-Improved-Work/4qbx-egtn).
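For programmatic access, the linked instances table can be pulled through the portal's Socrata open-data API. The sketch below infers the resource endpoint from the dataset ID in the URL above (74ug-57tr); this endpoint pattern is an assumption about how the portal exposes the table, so verify it on the dataset page.

```python
# Minimal sketch: pulling rows of the linked car-following instances table via the
# Socrata resource endpoint inferred from the dataset ID (74ug-57tr).
import requests

url = "https://datahub.transportation.gov/resource/74ug-57tr.json"
rows = requests.get(url, params={"$limit": 100}, timeout=30).json()

print(len(rows), "rows")
print(rows[0])  # field names follow the table's column headers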
ftopal/huggingface-models-raw dataset hosted on Hugging Face and contributed by the HF Datasets community
The research done to evaluate how the predictivity of models is affected by error in either the training or the test set is simple to describe conceptually. Benchmark datasets are downloaded from reputable sources and split into training and test sets. Randomized error is added, and models are built on both the error-laden and the native training sets. Those models are then used to predict both the error-laden and the native test sets, and differences in standard statistics commonly used to assess predictivity are observed. This dataset is associated with the following publication: Kolmar, S., and C. Grulke. The Effect of Noise on the Predictive Limit of QSAR Models. Journal of Cheminformatics, Springer, New York, NY, USA, 13: 92 (2021).
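The sketch below illustrates the general procedure on synthetic data (split, add randomized error to the training responses, compare test-set statistics). It is not the authors' code; the noise model and the learner are arbitrary placeholders.

```python
# Illustrative sketch (not the authors' code): add randomized error to a training set
# and compare test-set statistics for models built on native vs. error-laden data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 2.0 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Randomized error added to the training responses (here Gaussian, sigma = 1.0).
y_tr_noisy = y_tr + rng.normal(scale=1.0, size=y_tr.shape)

native = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
noisy = RandomForestRegressor(random_state=0).fit(X_tr, y_tr_noisy)

print("native training set, native test set:", r2_score(y_te, native.predict(X_te)))
print("noisy  training set, native test set:", r2_score(y_te, noisy.predict(X_te)))
```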
The 5-year goal of the “Model America” concept was to generate a model of every building in the United States. This data repository delivers on that goal with "Model America v1". Oak Ridge National Laboratory (ORNL) has developed the Automatic Building Energy Modeling (AutoBEM) software suite to process multiple types of data, extract building-specific descriptors, generate building energy models, and simulate them on High Performance Computing (HPC) resources. For more information, see AutoBEM-related publications (bit.ly/AutoBEM).

There were 125,715,609 buildings detected in the United States. Of this number, 122,146,671 (97.2%) buildings resulted in a successful generation and simulation of a building energy model. This dataset includes the full 125 million buildings. Future updates may include additional buildings, data improvements, or other algorithmic model enhancements in "Model America v2". This dataset contains OSM and IDF zip files for every U.S. county; each zip file contains the generated buildings from that county.

The .csv input data contains the following data fields:
1. ID - unique building ID
2. Centroid - building center location in latitude/longitude (from Footprint2D)
3. Footprint2D - building polygon of 2D footprint (lat1/lon1_lat2/lon2_...)
4. State_abbr - state name
5. Area - estimate of total conditioned floor area (ft2)
6. Area2D - footprint area (ft2)
7. Height - building height (ft)
8. NumFloors - number of floors (above-grade)
9. WWR_surfaces - percent of each facade (pair of points from Footprint2D) covered by fenestration/windows (average 14.5% for residential, 40% for commercial buildings)
10. CZ - ASHRAE Climate Zone designation
11. BuildingType - DOE prototype building designation (IECC=residential) as implemented by OpenStudio-standards
12. Standard - building vintage

This data is made free and openly available in hopes of stimulating any simulation-informed use case. Data is provided as-is with no warranties, express or implied, regarding fitness for a particular purpose. We wish to thank our sponsors, which include the Oak Ridge National Laboratory (ORNL) Laboratory Directed Research and Development (LDRD) program, the U.S. Dept. of Energy’s (DOE) Building Technologies Office (BTO), Office of Electricity (OE), Biological and Environmental Research (BER), and the National Nuclear Security Administration (NNSA).
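A minimal sketch of reading one county's .csv input file with pandas follows. The file name is a placeholder, and the assumed "lat/lon" layout of the Centroid field is inferred from the Footprint2D description above, so verify it against the actual files.

```python
# Minimal sketch: loading one county's Model America input .csv (placeholder file name).
import pandas as pd

df = pd.read_csv("county_buildings.csv")
# Expected fields (see the list above): ID, Centroid, Footprint2D, State_abbr, Area,
# Area2D, Height, NumFloors, WWR_surfaces, CZ, BuildingType, Standard
print(df.head())

# Example: split the Centroid field into floats, assuming a "lat/lon" layout
# (inferred from the Footprint2D description; adjust if the format differs).
df[["lat", "lon"]] = df["Centroid"].str.split("/", expand=True).astype(float)
print(df[["ID", "State_abbr", "lat", "lon", "Height", "NumFloors"]].head())
```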
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Korek Api Models is a dataset for object detection tasks - it contains Korek Api annotations for 200 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This .stl file was produced by scaling the original model and converting it directly to .stl format.
Keyword models for a subset of the NASA Thesaurus (https://www.sti.nasa.gov/nasa-thesaurus/). These models were trained on the NASA Technical Reports Server (NTRS). They can be used with the concept-tagging-api flask server (https://github.com/nasa/concept-tagging-api) to run a keyword prediction service.
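As a rough illustration, a locally running instance of the service could be queried over HTTP as below. The endpoint path and payload shape are hypothetical placeholders; consult the concept-tagging-api repository's README for the actual routes and request format.

```python
# Hedged sketch: querying a locally running concept-tagging-api server with `requests`.
# The URL path and JSON payload below are hypothetical placeholders, not the documented API.
import requests

text = "The spacecraft used an ion propulsion system for orbital maneuvers."
resp = requests.post(
    "http://localhost:5000/predict",   # hypothetical local endpoint
    json={"text": text},               # hypothetical payload shape
    timeout=30,
)
print(resp.json())  # expected: predicted NASA Thesaurus keywords with scores
```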
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This research introduces SegSub, a framework for applying targeted image perturbations to investigate VLM resilience against knowledge conflicts. Our analysis reveals distinct vulnerability patterns: while VLMs are robust to parametric conflicts (20% adherence rates), they exhibit significant weaknesses in identifying counterfactual conditions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Growth mixture models (GMMs) have been widely used to capture different growth trajectories of unobserved subpopulations (or latent classes). The traditional GMM determines the optimal number of classes through a process called class enumeration, which involves fitting a sequence of models with an increasing number of classes and then selecting the best-fitting model using statistical criteria. Despite its popularity, class enumeration has long been criticized for introducing severe subjectivity when comparing the fitted models.
Bayesian nonparametric (BNP) mixture modeling offers an alternative approach to detecting latent classes. The BNP approach circumvents the subjectivity inherent in class enumeration by placing a prior on the mixing distribution, which indirectly induces a prior on the number of classes. Consequently, the number of classes can be inferred directly from the data. However, the BNP approach remains understudied in the context of GMM. To reduce this research gap, the dissertation aims to: 1) propose two BNP-GMMs using the Dirichlet process mixture and the mixture of finite mixtures models; 2) compare the performance of the two proposed models in determining the number of classes K with that of the traditional GMM; and 3) evaluate the performance of the two proposed models in choosing K when using the mode versus when using a loss function called variation of information (VI).
Based on Monte Carlo simulations, Study 1 compares the proposed models and the traditional GMM in choosing K when there is no model misspecification, while Study 2 compares them when there is model misspecification in the latent mean structure. Overall, the simulation results showed that: 1) the proposed models using VI were more accurate than those using the mode; 2) when the population was homogeneous (comprising only one class), the proposed models using VI yielded the highest accuracy in choosing K, whereas when the population was heterogeneous (consisting of three classes), the proposed models using VI achieved superior accuracy in choosing K when class separation was large; and 3) the proposed models using VI demonstrated robustness against the exacerbated overfitting caused by model misspecification. For illustration, the proposed BNP-GMMs were applied to data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99.
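As a loose illustration of the BNP idea of letting the data determine the number of occupied classes, scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior can be fit to simulated data. This is a stand-in for intuition only, not the dissertation's BNP-GMM implementation, and the simulated data are purely illustrative.

```python
# Illustrative sketch only: a Dirichlet-process mixture as a stand-in for the BNP idea
# that the number of occupied classes is inferred from the data rather than enumerated.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Three latent classes of simulated repeated-measures observations.
data = np.vstack([
    rng.normal(loc=m, scale=0.5, size=(100, 4)) for m in (0.0, 3.0, 6.0)
])

bgm = BayesianGaussianMixture(
    n_components=10,                               # truncation level, an upper bound on K
    weight_concentration_prior_type="dirichlet_process",
    random_state=1,
).fit(data)

labels = bgm.predict(data)
print("inferred number of occupied classes:", len(np.unique(labels)))
```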
Polygons: 51 Vertices: 32
Comparison, by model, of the average of the coding benchmarks in the Artificial Analysis Intelligence Index (LiveCodeBench & SciCode).
The United States is by far the largest producer of notable machine learning programs in 2024, with **, ahead of China's **. It is notable that France, Germany, and the UK, despite their smaller economies and populations, together now outproduce China on machine learning programs, producing some ** models versus China's **.
Comparison of Image Input Price: USD per 1k images at 1MP (1024x1024) by Model
pr0mila/whisper-models dataset hosted on Hugging Face and contributed by the HF Datasets community
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
TERP is a post-hoc interpretation scheme for explaining black-box AI predictions. TERP works by constructing a linear, local interpretable model that approximates the black box in the vicinity of the instance being explained, and it determines the accuracy-interpretability trade-off by introducing and using the concept of interpretation entropy. This data repository contains three trained machine learning models: VAMPnets, Vision Transformer (ViT) models (pre-trained: model.ckpt; fine-tuned: best-model.ckpt; fine-tuned with randomized data: bad-model.ckpt), and an attention-based bi-directional LSTM, trained respectively on a molecular dynamics simulation trajectory of alanine dipeptide, facial attributes of celebrities (CelebA), and Antonio Gulli’s (AG’s) news corpus. The simulated trajectory (dihedral angles) for the molecular dynamics simulation is also provided.
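The sketch below illustrates the general idea of a local linear surrogate: perturb the neighborhood of one instance, weight samples by proximity, and fit a weighted linear model that locally approximates the black box. It is a LIME-style toy example, not the TERP implementation; the kernel width, regressor, and toy black box are arbitrary choices.

```python
# Illustrative sketch of a local linear surrogate around one instance (not TERP itself).
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(black_box, x, n_samples=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Perturbations in the vicinity of the instance being explained.
    X_local = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    y_local = black_box(X_local)
    # Proximity weights: nearby perturbations count more.
    weights = np.exp(-np.linalg.norm(X_local - x, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local, sample_weight=weights)
    return surrogate.coef_  # local feature attributions

# Toy black box and instance, for demonstration only.
f = lambda X: np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
print(local_linear_explanation(f, np.array([0.3, -1.0])))
```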
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Our most comprehensive database of AI models, containing over 800 models that are state of the art, highly cited, or otherwise historically notable. It tracks key factors driving machine learning progress and includes over 300 training compute estimates.
Comparison, by model, of the average of the math benchmarks in the Artificial Analysis Intelligence Index (AIME 2024 & Math-500).