100+ datasets found
  1. Data from: Benchmark Model for Wastewater Treatment Using an Activated Sludge Process

    • catalog.data.gov
    • nawi.openei.org
    • +3more
    Updated Jan 20, 2025
    Cite: Princeton University (2025). Benchmark Model for Wastewater Treatment Using an Activated Sludge Process [Dataset]. https://catalog.data.gov/dataset/benchmark-model-for-wastewater-treatment-using-an-activated-sludge-process-22157
    Dataset updated: Jan 20, 2025
    Dataset provided by: Princeton University
    Description:

    This is a benchmark model for wastewater treatment using an activated sludge process. The activated sludge process is a means of treating both municipal and industrial wastewater: a multi-chamber reactor unit uses highly concentrated microorganisms to degrade organics and remove nutrients from wastewater, producing high-quality effluent. The model provides pollutant concentrations, a mass balance, electricity requirements, and treatment costs, and will be continuously updated based on the latest data.

  2. Kaggle, IHME and LANL Forecasts

    • kaggle.com
    Updated May 26, 2020
    Cite: Anthony Goldbloom (2020). Kaggle, IHME and LANL Forecasts [Dataset]. https://www.kaggle.com/antgoldbloom/covid19-epidemiological-benchmarking-dataset/code
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated: May 26, 2020
    Dataset provided by: Kaggle (http://kaggle.com/)
    Authors: Anthony Goldbloom
    Description:

    This dataset was created by Anthony Goldbloom.

  3. Data from: DTBM Dataset

    • paperswithcode.com
    Updated Nov 14, 2022
    Cite: Joern Ploennigs; Konstantinos Semertzidis; Fabio Lorenzi; Nandana Mihindukulasooriya (2022). DTBM Dataset [Dataset]. https://paperswithcode.com/dataset/dtbm
    Dataset updated: Nov 14, 2022
    Authors: Joern Ploennigs; Konstantinos Semertzidis; Fabio Lorenzi; Nandana Mihindukulasooriya
    Description:

    DTBM is a benchmark dataset for Digital Twins that reflects their typical characteristics and looks into the scaling challenges of different knowledge graph technologies.

  4. Benchmark Results for Model

    • zenodo.org
    • data.niaid.nih.gov
    json
    Updated May 30, 2024
    Cite: Your Name; Your Name (2024). Benchmark Results for Model [Dataset]. http://doi.org/10.5281/zenodo.11397919
    Available download formats: json
    Dataset updated: May 30, 2024
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Your Name; Your Name
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/); license information was derived automatically
    Description:

    Results of the benchmark run; see the attached JSON for details.

  5. Pretrained Models of the Benchmarking Algorithms for UVCGAN Dataset

    • paperswithcode.com
    • opendatalab.com
    Updated Mar 8, 2022
    Cite: Dmitrii Torbunov; Yi Huang; Haiwang Yu; Jin Huang; Shinjae Yoo; MeiFeng Lin; Brett Viren; Yihui Ren (2022). Pretrained Models of the Benchmarking Algorithms for UVCGAN Dataset [Dataset]. https://paperswithcode.com/dataset/pretrained-models-of-the-benchmarking
    Dataset updated: Mar 8, 2022
    Authors: Dmitrii Torbunov; Yi Huang; Haiwang Yu; Jin Huang; Shinjae Yoo; MeiFeng Lin; Brett Viren; Yihui Ren
    Description:

    The pretrained models from four image-translation algorithms (ACL-GAN, Council-GAN, CycleGAN, and U-GAT-IT) on three benchmarking datasets: Selfie2Anime, CelebA_gender, and CelebA_glasses.

    We trained the models to provide benchmarks for the algorithm detailed in the paper "UVCGAN: UNet Vision Transformer Cycle-consistent GAN for Unpaired Image-to-Image Translation".

    We only trained a model where a pretrained model was provided by the benchmarking algorithm.

  6. CAMELS benchmark models

    • search.dataone.org
    • hydroshare.org
    • +1more
    Updated Dec 5, 2021
    Cite: Frederik Kratzert (2021). CAMELS benchmark models [Dataset]. http://doi.org/10.4211/hs.474ecc37e7db45baa425cdb4fc1b61e1
    Dataset updated: Dec 5, 2021
    Dataset provided by: HydroShare
    Authors: Frederik Kratzert
    Description:

    This data set contains the model outputs of different hydrology models calibrated against the same forcing data (Maurer) and over the same calibration period for the CAMELS data set. The models are SAC-SMA, VIC, HBV, FUSE, and mHM. All of these models have been calibrated for each basin separately; additionally, regionally calibrated model outputs exist for VIC and mHM. All models were calibrated over the period 1 October 1999 to 30 September 2008 and validated over the period 1 October 1989 to 30 September 1999.
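    Benchmark outputs of this kind are conventionally scored against observed discharge with the Nash-Sutcliffe efficiency (NSE). The metric is not named in the description above, so the following is only a minimal usage sketch; it assumes you have already aligned observed and simulated streamflow for one basin as NumPy arrays.

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit; 0.0 means the
    simulation is no better than predicting the mean of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Toy example with made-up discharge values (m^3/s):
obs = np.array([1.0, 2.0, 3.0, 2.5, 1.5])
sim = np.array([1.1, 1.9, 2.8, 2.6, 1.4])
print(f"NSE = {nse(obs, sim):.3f}")
```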

  7. BioMASS Space Model Dataset Benchmark

    • ieee-dataport.org
    Updated May 18, 2022
    Cite: Candelaria Sansores (2022). BioMASS Space Model Dataset Benchmark [Dataset]. https://ieee-dataport.org/open-access/biomass-space-model-dataset-benchmark
    Dataset updated: May 18, 2022
    Authors: Candelaria Sansores
    License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
    Description:

    Benchmark dataset accompanying the paper "A spatial model for situated multiagent systems that optimizes neighborhood search". In this paper we presented a new model to implement a spatially explicit environment that supports constant-time sensory (neighborhood search) and locomotion functions for situated multiagent systems.

  8. Linear Time Varying System Examples for Model Order Reduction

    • explore.openaire.eu
    Updated Jul 26, 2017
    Cite: Norman Lang; Jens Saak; Tatjana Stykel (2017). Linear Time Varying System Examples for Model Order Reduction [Dataset]. http://doi.org/10.5281/zenodo.834971
    Dataset updated: Jul 26, 2017
    Authors: Norman Lang; Jens Saak; Tatjana Stykel
    Description:

    Three linear time-varying system benchmarks implemented in MATLAB:
    1) a time-varying version of the Oberwolfach steel cooling benchmark;
    2) a one-dimensional heat equation with a moving point heat source;
    3) a linearized Burgers equation.
    Model 1 comes in the same 5 resolutions as the original time-invariant version; the other two are freely scalable.
    Reference: N. Lang, J. Saak, and T. Stykel, Balanced truncation model reduction for linear time-varying systems, Math. Comput. Model. Dyn. Syst., 22 (2016), pp. 267–281. doi:10.1080/13873954.2016.1198386.
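    As an illustration of the system class these benchmarks instantiate (not code from the dataset itself, which is MATLAB), here is a minimal forward-Euler simulator for a generic linear time-varying system x'(t) = A(t) x(t) + B(t) u(t); the A, B, and u below are toy stand-ins, not the benchmark models.

```python
import numpy as np

def simulate_ltv(A, B, u, x0, t):
    """Forward-Euler integration of x'(t) = A(t) x(t) + B(t) u(t).
    A(t), B(t) are callables returning matrices; u(t) returns the input vector."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        x = x + dt * (A(t[k]) @ x + B(t[k]) @ u(t[k]))
        traj.append(x.copy())
    return np.array(traj)

# Toy 2-state system with a time-varying diagonal entry:
A = lambda t: np.array([[-1.0 - 0.5 * t, 0.0], [0.0, -2.0]])
B = lambda t: np.array([[1.0], [1.0]])
u = lambda t: np.array([np.sin(t)])
traj = simulate_ltv(A, B, u, x0=[1.0, 0.0], t=np.linspace(0.0, 5.0, 501))
```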

  9. Coding Index by Models Model

    • artificialanalysis.ai
    Updated May 15, 2025
    Cite: Artificial Analysis (2025). Coding Index by Models Model [Dataset]. https://artificialanalysis.ai/models
    Dataset updated: May 15, 2025
    Dataset authored and provided by: Artificial Analysis
    Description:

    Comparison of models by the average of the coding benchmarks in the Artificial Analysis Intelligence Index (LiveCodeBench & SciCode).

  10. RewardBench Dataset

    • paperswithcode.com
    Updated Jan 20, 2025
    Cite: Nathan Lambert; Valentina Pyatkin; Jacob Morrison; LJ Miranda; Bill Yuchen Lin; Khyathi Chandu; Nouha Dziri; Sachin Kumar; Tom Zick; Yejin Choi; Noah A. Smith; Hannaneh Hajishirzi (2025). RewardBench Dataset [Dataset]. https://paperswithcode.com/dataset/rewardbench
    Dataset updated: Jan 20, 2025
    Authors: Nathan Lambert; Valentina Pyatkin; Jacob Morrison; LJ Miranda; Bill Yuchen Lin; Khyathi Chandu; Nouha Dziri; Sachin Kumar; Tom Zick; Yejin Choi; Noah A. Smith; Hannaneh Hajishirzi
    Description:

    RewardBench is a benchmark designed to evaluate the capabilities and safety of reward models, including those trained with Direct Preference Optimization (DPO). It serves as the first evaluation tool for reward models and provides valuable insights into their performance and reliability¹.

    Here are the key components of RewardBench:

    Common Inference Code: The repository includes common inference code for various reward models, such as Starling, PairRM, OpenAssistant, and more. These models can be evaluated using the provided tools¹.

    Dataset and Evaluation: The RewardBench dataset consists of prompt-win-lose trios spanning chat, reasoning, and safety scenarios. It allows benchmarking reward models on challenging, structured, and out-of-distribution queries. The goal is to enhance scientific understanding of reward models and their behavior².

    Scripts for Evaluation:

    scripts/run_rm.py: evaluates individual reward models.
    scripts/run_dpo.py: evaluates direct preference optimization (DPO) models.
    scripts/train_rm.py: a basic reward-model training script built on TRL (Transformer Reinforcement Learning)¹.

    Installation and Usage:

    Install PyTorch on your system.
    Install the required dependencies with pip install -e .
    Set the environment variable HF_TOKEN with your token.
    To contribute your model to the leaderboard, open an issue on Hugging Face with the model name.
    For local model evaluation, follow the instructions in the repository¹.
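    A minimal sketch of the accuracy computation behind those scripts: a reward model "wins" a prompt-win-lose trio when it scores the chosen response above the rejected one. The dataset id follows the allenai/reward-bench GitHub reference below; the split and column names ("filtered", "prompt", "chosen", "rejected") are assumptions to verify against the dataset card, and score_fn is a hypothetical stand-in for whichever reward model you evaluate.

```python
from datasets import load_dataset

def rewardbench_accuracy(score_fn, split="filtered"):
    """Fraction of trios where the reward model prefers chosen over rejected."""
    ds = load_dataset("allenai/reward-bench", split=split)  # assumed dataset id/split
    wins = sum(
        score_fn(rec["prompt"], rec["chosen"]) > score_fn(rec["prompt"], rec["rejected"])
        for rec in ds  # field names assumed; check the dataset card
    )
    return wins / len(ds)
```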

    Remember that RewardBench provides a standardized way to assess reward models, ensuring transparency and comparability across different approaches.

    (1) GitHub - allenai/reward-bench: RewardBench, the first evaluation tool for reward models. https://github.com/allenai/reward-bench
    (2) RewardBench: Evaluating Reward Models for Language Modeling. https://arxiv.org/abs/2403.13787
    (3) RewardBench: Evaluating Reward Models for Language Modeling. https://paperswithcode.com/paper/rewardbench-evaluating-reward-models-for

  11. Performance of DeepSeek-R1 compared to similar models in Chinese benchmarks 2025

    • statista.com
    Updated Feb 3, 2025
    Cite: Statista (2025). Performance of DeepSeek-R1 compared to similar models in Chinese benchmarks 2025 [Dataset]. https://www.statista.com/statistics/1552890/deepseek-performance-of-deepseek-r1-compared-to-similar-models-by-chinese-benchmark/
    Dataset updated: Feb 3, 2025
    Dataset authored and provided by: Statista (http://statista.com/)
    Time period covered: Jan 2025
    Area covered: China
    Description:

    In a performance comparison on Chinese-language benchmarks in 2025, DeepSeek's AI model DeepSeek-R1 outperformed all other representative models except DeepSeek-V3. The DeepSeek models performed best in the mathematics and Chinese-language benchmarks and weakest in coding.

  12. Benchmark Petri Net Models Used for the Evaluation of B-I-Sat

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 3, 2024
    Cite: Darvas, Dániel (2024). Benchmark Petri Net Models Used for the Evaluation of B-I-Sat [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_200500
    Dataset updated: Aug 3, 2024
    Dataset authored and provided by: Darvas, Dániel
    License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/); license information was derived automatically
    Description:

    Collection and documentation of the benchmark Petri net models used for the evaluation of the B-I-Sat algorithm.

  13. Benchmark model for nearly-zero-energy terraced dwellings

    • dataverse.harvard.edu
    • explore.openaire.eu
    bin, pdf, tsv
    Updated Jul 6, 2021
    Cite: Harvard Dataverse (2021). Benchmark model for nearly-zero-energy terraced dwellings [Dataset]. http://doi.org/10.7910/DVN/GJI84W
    Available download formats: bin(1548837), bin(2545843), bin(270116), tsv(898), bin(29748), bin(28051), bin(271202), pdf(516676), bin(1892084)
    Dataset updated: Jul 6, 2021
    Dataset provided by: Harvard Dataverse
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/); license information was derived automatically
    Description:

    A building performance simulation benchmark model for nearly-zero-energy dwellings in Brussels. The study reports an inventory and field survey conducted on a terraced house renovated after 2010. An analysis of energy consumption (electricity and natural gas) and a walkthrough survey were conducted. A building performance simulation model was created in EnergyPlus to benchmark the average energy consumption and building characteristics. The estimate's validity was further checked against public statistics and verified through model calibration and utility-bill comparison. The benchmark has an average energy use intensity of 29 kWh/m2/year and represents terraced single-family houses after renovation.
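    As a worked check of the reported figure, energy use intensity (EUI) is simply annual site energy divided by heated floor area; the numbers below are illustrative stand-ins, chosen only so the result mirrors the 29 kWh/m2/year in the description.

```python
annual_energy_kwh = 3480.0  # hypothetical annual electricity + gas total, kWh
floor_area_m2 = 120.0       # hypothetical heated floor area, m^2

eui = annual_energy_kwh / floor_area_m2
print(f"EUI = {eui:.0f} kWh/m2/year")  # -> EUI = 29 kWh/m2/year
```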

  14. ether0-benchmark

    • huggingface.co
    Updated Jun 5, 2025
    Cite: Future House (2025). ether0-benchmark [Dataset]. https://huggingface.co/datasets/futurehouse/ether0-benchmark
    Dataset updated: Jun 5, 2025
    Dataset authored and provided by: Future House
    License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/); license information was derived automatically
    Description:

    QA benchmark (test set) for the ether0 reasoning language model (https://huggingface.co/futurehouse/ether0). The benchmark is built from commonly used tasks, such as reaction prediction from USPTO/ORD, molecular captioning from PubChem, and predicting GHS classification. It is unique among benchmarks in that every answer is a molecule. It is balanced so that each task has about 25 questions, a reasonable amount for frontier-model evaluations. The tasks generally follow… See the full description on the dataset page: https://huggingface.co/datasets/futurehouse/ether0-benchmark.

  15. arabic-broad-benchmark

    • huggingface.co
    Updated May 13, 2025
    Cite: SILMA AI - Arabic Language Models (2025). arabic-broad-benchmark [Dataset]. https://huggingface.co/datasets/silma-ai/arabic-broad-benchmark
    Dataset updated: May 13, 2025
    Dataset authored and provided by: SILMA AI - Arabic Language Models
    License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0); license information was derived automatically
    Description:

    The Arabic Broad Benchmark (ABB) is a unique dataset and advanced benchmark created by SILMA.AI to assess the performance of Large Language Models in Arabic. ABB consists of 470 high-quality, human-validated questions sampled from 64 Arabic benchmarking datasets, evaluating 22 categories and skills. The accompanying benchmarking script uses the dataset to evaluate models or APIs using a mix of 20+ manual rules and LLM-as-judge variations customized… See the full description on the dataset page: https://huggingface.co/datasets/silma-ai/arabic-broad-benchmark.

  16. BenchLMM Dataset

    • paperswithcode.com
    Updated Nov 21, 2024
    Cite: Rizhao Cai; Zirui Song; Dayan Guan; Zhenhao Chen; Xing Luo; Chenyu Yi; Alex Kot (2024). BenchLMM Dataset [Dataset]. https://paperswithcode.com/dataset/benchlmm
    Dataset updated: Nov 21, 2024
    Authors: Rizhao Cai; Zirui Song; Dayan Guan; Zhenhao Chen; Xing Luo; Chenyu Yi; Alex Kot
    Description:

    Large Multimodal Models (LMMs) such as GPT-4V and LLaVA have shown remarkable capabilities in visual reasoning with common image styles. However, their robustness against diverse style shifts, crucial for practical applications, remains largely unexplored. In this paper, we propose a new benchmark, BenchLMM, to assess the robustness of LMMs against three different styles: artistic image style, imaging sensor style, and application style, where each style has five sub-styles. Utilizing BenchLMM, we comprehensively evaluate state-of-the-art LMMs and reveal: 1) LMMs generally suffer performance degradation when working with other styles; 2) an LMM performing better than another model in the common style does not guarantee superior performance in other styles; 3) LMMs' reasoning capability can be enhanced by prompting LMMs to predict the style first, based on which we propose a versatile and training-free method for improving LMMs; 4) an intelligent LMM is expected to interpret the causes of its errors when facing stylistic variations. We hope that our benchmark and analysis can shed new light on developing more intelligent and versatile LMMs.

  17. Gravity forward modelling benchmark dataset

    • figshare.com
    • data.4tu.nl
    zip
    Updated Jun 4, 2023
    Cite: Bart Root (2023). Gravity forward modelling benchmark dataset [Dataset]. http://doi.org/10.4121/19279163.v1
    Available download formats: zip
    Dataset updated: Jun 4, 2023
    Dataset provided by: 4TU.ResearchData
    Authors: Bart Root
    License: GNU GPL v3.0 (https://www.gnu.org/licenses/gpl-3.0.html)
    Description:

    This document describes the benchmark codes used to reproduce the figures from the Solid Earth article: Root, B., Sebera, J., Szwillus, W., Thieulot, C., Martinec, Z., and Fullea, J., Benchmark forward gravity schemes: the gravity field of a realistic lithosphere model WINTERC-G, Solid Earth Discussions, 2021, 1–36, doi:10.5194/se-2021-145. Three different benchmarks are discussed:
    - shell test 2: equal thickness, laterally varying density;
    - shell test 3: laterally varying density interface (CRUST1.0 Moho);
    - WINTERC-grav benchmark: full layered model.
    The MATLAB codes and data files are presented in the database.

  18. A Benchmark for 3D Interest Point Detection Algorithms

    • catalog.data.gov
    • data.nist.gov
    Updated Jul 29, 2022
    Cite: National Institute of Standards and Technology (2022). A Benchmark for 3D Interest Point Detection Algorithms [Dataset]. https://catalog.data.gov/dataset/a-benchmark-for-3d-interest-point-detection-algorithms-2c04d
    Dataset updated: Jul 29, 2022
    Dataset provided by: National Institute of Standards and Technology (http://www.nist.gov/)
    Description:

    This benchmark aims to provide tools to evaluate 3D interest point detection algorithms with respect to human-generated ground truth. Using a web-based subjective experiment, human subjects marked 3D interest points on a set of 3D models. The models were organized in two datasets: Dataset A and Dataset B. Dataset A consists of 24 models which were hand-marked by 23 human subjects. Dataset B is larger with 43 models, and it contains all the models in Dataset A; 16 human subjects marked all the models in this larger set. Some of the models are standard models widely used in 3D shape research, and they have been used as test objects by researchers working on the best-view problem.

    We have compared the following 3D interest point detection algorithms. The interest points detected on the 3D models of the dataset can be downloaded from the link below; please refer to the README for details.
    - Mesh saliency [Lee et al. 2005]
    - Salient points [Castellani et al. 2008]
    - 3D-Harris [Sipiran and Bustos, 2010]
    - 3D-SIFT [Godil and Wagan, 2011] (some models in the dataset are not watertight, so their volumetric representations could not be generated; the 3D-SIFT algorithm was therefore unable to detect interest points for those models)
    - Scale-dependent corners [Novatnack and Nishino, 2007]
    - HKS-based interest points [Sun et al. 2009]

    Please cite the paper: Helin Dutagaci, Chun Pan Cheung, Afzal Godil, "Evaluation of 3D interest point detection techniques via human-generated ground truth", The Visual Computer, 2012.

    References:
    [Lee et al. 2005] Lee, C.H., Varshney, A., Jacobs, D.W.: Mesh saliency. In: ACM SIGGRAPH 2005, pp. 659–666 (2005)
    [Castellani et al. 2008] Castellani, U., Cristani, M., Fantoni, S., Murino, V.: Sparse points matching by combining 3D mesh saliency with statistical descriptors. Comput. Graph. Forum 27(2), 643–652 (2008)
    [Sipiran and Bustos, 2010] Sipiran, I., Bustos, B.: A robust 3D interest points detector based on Harris operator. In: Eurographics 2010 Workshop on 3D Object Retrieval (3DOR'10), pp. 7–14 (2010)
    [Godil and Wagan, 2011] Godil, A., Wagan, A.I.: Salient local 3D features for 3D shape retrieval. In: 3D Image Processing (3DIP) and Applications II, SPIE (2011)
    [Novatnack and Nishino, 2007] Novatnack, J., Nishino, K.: Scale-dependent 3D geometric features. In: ICCV, pp. 1–8 (2007)
    [Sun et al. 2009] Sun, J., Ovsjanikov, M., Guibas, L.: A concise and provably informative multi-scale signature based on heat diffusion. In: Eurographics Symposium on Geometry Processing (SGP), pp. 1383–1392 (2009)
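    A hedged sketch of the benchmark's basic shape: score detected interest points against the human-marked ground truth by counting a detection as correct when it lies within a tolerance radius of some ground-truth point. The paper's actual criteria are more refined (e.g. geodesic rather than Euclidean distances); this only illustrates the evaluation pattern.

```python
import numpy as np

def precision_at_radius(detected, ground_truth, radius):
    """detected, ground_truth: (N, 3) and (M, 3) arrays of interest points.
    Returns the fraction of detections within `radius` of a ground-truth point."""
    d = np.linalg.norm(detected[:, None, :] - ground_truth[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= radius))
```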

  19. Math Index by Models Model

    • artificialanalysis.ai
    Updated May 15, 2025
    Cite: Artificial Analysis (2025). Math Index by Models Model [Dataset]. https://artificialanalysis.ai/models
    Dataset updated: May 15, 2025
    Dataset authored and provided by: Artificial Analysis
    Description:

    Comparison of models by the average of the math benchmarks in the Artificial Analysis Intelligence Index (AIME 2024 & Math-500).

  20. Performance of DeepSeek's Janus compared to similar models in image benchmarks 2025

    • statista.com
    Updated Jan 30, 2025
    Cite: Statista (2025). Performance of DeepSeek's Janus compared to similar models in image benchmarks 2025 [Dataset]. https://www.statista.com/statistics/1552920/deepseek-image-generation-performance-of-janus-compared-to-similar-models-by-benchmark/
    Dataset updated: Jan 30, 2025
    Dataset authored and provided by: Statista (http://statista.com/)
    Time period covered: Jan 2025
    Area covered: China
    Description:

    In a benchmark comparison, DeepSeek's Janus-Pro-7B model outperforms similar models on the GenEval benchmark and scores comparable results on DPC-Bench. The company developed a large language model and a text-to-image model that achieve results similar to industry leaders such as DALL-E and Stable Diffusion.
