100+ datasets found
  1. PDEBench Datasets

    • darus.uni-stuttgart.de
    • opendatalab.com
    Updated Feb 13, 2024
    Cite
    Makoto Takamoto; Timothy Praditia; Raphael Leiteritz; Dan MacKinlay; Francesco Alesiani; Dirk Pflüger; Mathias Niepert (2024). PDEBench Datasets [Dataset]. http://doi.org/10.18419/DARUS-2986
    221 scholarly articles cite this dataset (View in Google Scholar)
    Dataset updated
    Feb 13, 2024
    Dataset provided by
    DaRUS
    Authors
    Makoto Takamoto; Timothy Praditia; Raphael Leiteritz; Dan MacKinlay; Francesco Alesiani; Dirk Pflüger; Mathias Niepert
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    This dataset contains benchmark data generated by numerical simulation of different PDEs, namely the 1D advection, 1D Burgers', 1D and 2D diffusion-reaction, 1D diffusion-sorption, 1D, 2D, and 3D compressible Navier-Stokes, 2D Darcy flow, and 2D shallow water equations. The dataset is intended to advance research in scientific machine learning. The data are generally stored in HDF5 format, with the array dimensions packed according to the convention [b,t,x1,...,xd,v], where b is the batch size (i.e. number of samples), t is the time dimension, x1,...,xd are the spatial dimensions, and v is the number of channels (i.e. number of variables of interest). More detailed information is provided in our GitHub repository (https://github.com/pdebench/PDEBench) and in our paper submitted to the NeurIPS 2022 Benchmark track.
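    As an illustration, here is a minimal sketch of how one of the HDF5 files might be opened and its [b,t,x1,...,xd,v] layout inspected with h5py; the file name and the assumption of a single top-level dataset key are placeholders, not part of the release:

        import h5py  # pip install h5py

        # Hypothetical file name; substitute one of the downloaded PDEBench files.
        with h5py.File("1D_Advection_beta0.1.hdf5", "r") as f:
            key = list(f.keys())[0]      # the dataset key depends on the file
            data = f[key][...]           # loads everything; slice instead for large files
            # Expected layout: [b, t, x1, ..., xd, v]
            print("shape:", data.shape)  # e.g. (samples, timesteps, nx, channels)
            first_trajectory = data[0]   # all timesteps of the first sample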

  2. Data Management of a Biotechnology Network as a Contribution to FAIR Data...

    • darus.uni-stuttgart.de
    Updated Dec 16, 2022
    Cite
    Martina Rehnert; Ralf Takors (2022). Data Management of a Biotechnology Network as a Contribution to FAIR Data Mangement [Dataset]. http://doi.org/10.18419/DARUS-829
    Dataset updated
    Dec 16, 2022
    Dataset provided by
    DaRUS
    Authors
    Martina Rehnert; Ralf Takors
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    The Research Data Management Plan (RDMP) of the priority program SPP 2170 is the formal document intended to guide the handling of data. Since enormous amounts of research data (Big Data) will be generated, exchange of and access to the data must be ensured. Every laboratory experiment and every simulation generates large amounts of unstructured data. To make these findable, accessible, interoperable, and reusable (FAIR), discipline-specific criteria must be defined in addition to the hardware and software that form the general platform. The RDMP of the DFG-funded priority program SPP 2170 therefore describes how this information could be processed in the future.

  3. Movies of thin film water on NaCl(100) surface

    • darus.uni-stuttgart.de
    • search.nfdi4chem.de
    Updated Mar 25, 2022
    + more versions
    Cite
    Simon Gravelle (2022). Movies of thin film water on NaCl(100) surface [Dataset]. http://doi.org/10.18419/DARUS-2697
    Dataset updated
    Mar 25, 2022
    Dataset provided by
    DaRUS
    Authors
    Simon Gravelle
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    Videos showing water molecules at a sodium chloride (NaCl) solid surface for different water contents. The force field for the water is TIP4P/epsilon (https://doi.org/10.1021/jp410865y), and the force field for the ions is from Loche et al. (https://doi.org/10.1021/acs.jpcb.1c05303). The trajectories were generated with the GROMACS simulation package, and the videos were created with VMD.

  4. COLife_02 - Gigamap and Game Design

    • darus.uni-stuttgart.de
    Updated Feb 27, 2024
    Cite
    Marie Davidova; Hananeh Behnam; María Claudia Valverde Rojas; Cristina Guerriero; Hsuan Yeh; Jianing Huang; Mahir Köse (2024). COLife_02 - Gigamap and Game Design [Dataset]. http://doi.org/10.18419/DARUS-3985
    Dataset updated
    Feb 27, 2024
    Dataset provided by
    DaRUS
    Authors
    Marie Davidova; Hananeh Behnam; María Claudia Valverde Rojas; Cristina Guerriero; Hsuan Yeh; Jianing Huang; Mahir Köse
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    The dataset covers the gigamapping of the game design and the execution of the game GoCOLife, which introduces a more-than-human perspective to the players. The data were produced as part of the studio course 'COLife: More-than-Human Perspective to CoDesign' in the summer semester 2024.

  5. DP-FR-1924-MODLTW Polar Data Re15E5

    • darus.uni-stuttgart.de
    • windlab.hlrs.de
    Updated Oct 2, 2024
    Cite
    Dominic Seyfert (2024). DP-FR-1924-MODLTW Polar Data Re15E5 [Dataset]. http://doi.org/10.18419/DARUS-4491
    Dataset updated
    Oct 2, 2024
    Dataset provided by
    DaRUS
    Authors
    Dominic Seyfert
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    Polar measurements of the DP-FR-1924-MODLWT airfoil. Data recorded at the Laminar Wind Tunnel, University of Stuttgart. For details on the measurements, visit https://www.iag.uni-stuttgart.de/en/institute/test-facilities/laminar-wind-tunnel/

  6. Data from: Object model data sets of the case study specimens for the...

    • darus.uni-stuttgart.de
    • windlab.hlrs.de
    Updated Mar 15, 2023
    Cite
    Marta Gil Pérez; Christoph Zechmeister; Fabian Kannenberg; Pascal Mindermann; Laura Balangé; Yanan Guo; Sebastian Hügle; Andreas Gienger; David Forster; Manfred Bischoff; Cristina Tarín; Peter Middendorf; Volker Schwieger; Götz Theodor Gresser; Achim Menges; Jan Knippers (2023). Object model data sets of the case study specimens for the computational co-design framework for coreless wound fibre-polymer composite structures [Dataset]. http://doi.org/10.18419/DARUS-3375
    Dataset updated
    Mar 15, 2023
    Dataset provided by
    DaRUS
    Authors
    Marta Gil Pérez; Christoph Zechmeister; Fabian Kannenberg; Pascal Mindermann; Laura Balangé; Yanan Guo; Sebastian Hügle; Andreas Gienger; David Forster; Manfred Bischoff; Cristina Tarín; Peter Middendorf; Volker Schwieger; Götz Theodor Gresser; Achim Menges; Jan Knippers
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    This repository contains the object model data sets of the case study specimens from the related publication: Gil Pérez, M., Zechmeister, C., Kannenberg, F., Mindermann, P., Balangé, L., Guo, Y., Hügle, S., Gienger, A., Forster, D., Bischoff, M., Tarín, C., Middendorf, P., Schwieger, V., Gresser, G. T., Menges, A., Knippers, J.: 2022, Computational co-design framework for coreless wound fibre-polymer composite structures. Journal of Computational Design and Engineering, 9(2), pp. 310-329. (doi: 10.1093/jcde/qwab081)

    Abstract: In coreless filament winding, resin-impregnated fibre filaments are wound around anchor points without an additional mould. The final geometry of the produced part results from the interaction of fibres in space and is initially undetermined. Therefore, the success of large-scale coreless wound fibre composite structures for architectural applications relies on the reciprocal collaboration of simulation, fabrication, quality evaluation, and data integration domains. The correlation of data from those domains enables the optimization of the design towards ideal performance and material efficiency. This paper elaborates on a computational co-design framework to enable new modes of collaboration for coreless wound fibre-polymer composite structures. It introduces the use of a shared object model acting as a central data repository that facilitates interdisciplinary data exchange and the investigation of correlations between domains. The application of the developed computational co-design framework is demonstrated in a case study in which the data are successfully mapped, linked, and analysed across the different fields of expertise. The results showcase the framework's potential to gain a deeper understanding of large-scale coreless wound filament structures and their fabrication and geometrical implications for design optimization.

    The data set contains data from three sets of ten coreless filament wound fiber-polymer composite testing specimens that were digitally simulated, robotically manufactured, laser scanned and mechanically tested. The data contains contributions from different domains: simulation, fabrication, quality evaluation, and data integration. The data was stored via an open-source object model (BHoM) that was extended for its use in coreless filament winding. "The BHoM (Buildings and Habitats object Model) is a collaborative computational development project for the built environment. BHoM aims to standardise the data and functionality that AEC domain experts use to design across all disciplines." (Source: https://bhom.xyz/documentation/)

    The files are named by specimen type and ID (SX-Y.json), where X refers to the specimen type and Y refers to the specimen ID.

  7. Replication Data for: Rayleigh invariance allows the estimation of effective...

    • darus.uni-stuttgart.de
    Updated Jan 14, 2025
    Cite
    Leon Keim; Holger Class (2025). Replication Data for: Rayleigh invariance allows the estimation of effective CO2 fluxes due to convective dissolution into water-filled fractures [Dataset]. http://doi.org/10.18419/DARUS-4143
    Dataset updated
    Jan 14, 2025
    Dataset provided by
    DaRUS
    Authors
    Leon Keim; Holger Class
    License

    Custom license: https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.18419/DARUS-4143

    Dataset funded by
    DFG
    Description

    This dataset features both data and code related to the research article titled "Rayleigh Invariance Enables Estimation of Effective CO2 Fluxes Resulting from Convective Dissolution in Water-Filled Fractures." It includes raw data packaged as tarballs, including the Python scripts used to derive the results presented in the publication. High-resolution raw data for the contour plots is available upon request. The workflow is as follows (a consolidated sketch is given after this list):

    1. Download the dataset: Download the dataset files using Access Dataset. Ensure you have sufficient disk space available for storing and processing the dataset.
    2. Extract the dataset: The data is compressed in tar.xz format; use appropriate tools to extract it. For example, on Linux:
       tar -xf Publication_CCS.tar.xz
       tar -xf Publication_Karst.tar.xz
       tar -xf Validation_Sim.tar.xz
       This creates directories containing the dataset files.
    3. Install the required Python packages: Before running any code, ensure the necessary Python packages (tested with Python 3.10) are installed. The required packages and their versions are listed in the requirements.txt file and can be installed with pip:
       pip install -r requirements.txt
    4. Run the post-processing script: After extracting the dataset and installing the required packages, run the provided post-processing script (post_process.py), which replicates all plots of the publication from the dataset:
       python3 post_process.py
       The script generates the plots and writes them to the specified directory.
    5. Explore and analyze: Once the script has finished, explore the generated plots to gain insights from the dataset. Feel free to modify the script or to use the dataset in your own analyses and experiments. High-resolution data, such as the vtu files for the contour plots, is available upon request; please feel free to reach out if needed.
    6. Small grid study: A separate tarball holds the data that was generated to study the grid used in the related publication:
       tar -xf Publication_CCS.tar.xz
       After unpacking it and installing the requirements from above, the Python script can be used to generate the plots.
    7. Citation: If you use this dataset in your research or a publication, please cite the original source appropriately to give credit to the authors and contributors.
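    The steps above could be automated roughly as follows; this is a sketch only, in which the archive and script names are taken from the description while the working directory and paths are assumptions:

        import subprocess
        import tarfile

        # Archives named in the dataset description.
        archives = ["Publication_CCS.tar.xz", "Publication_Karst.tar.xz", "Validation_Sim.tar.xz"]
        for name in archives:
            with tarfile.open(name, mode="r:xz") as tar:
                tar.extractall()  # unpack next to the downloaded archives

        # Install the pinned dependencies, then reproduce the plots of the publication.
        subprocess.run(["pip", "install", "-r", "requirements.txt"], check=True)
        subprocess.run(["python3", "post_process.py"], check=True)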

  8. Measured hydrometeorologic data

    • darus.uni-stuttgart.de
    • windlab.hlrs.de
    Updated Aug 12, 2023
    Cite
    Elisabeth Nißler; Claus Haslauer (2023). Measured hydrometeorologic data [Dataset]. http://doi.org/10.18419/DARUS-3554
    Dataset updated
    Aug 12, 2023
    Dataset provided by
    DaRUS
    Authors
    Elisabeth Nißler; Claus Haslauer
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Solving the energy balance at the atmosphere-subsurface interface determines the heat input (in summer) into the subsurface. We subsequently use this to calculate heat transport and water flow into the subsurface and then to calculate temperatures around drinking-water supply pipes. The data come from the weather station of the University of Stuttgart. We provide the measured quantities needed to compute the interface boundary conditions: incoming long-wave radiation, incoming short-wave radiation, air temperature at 2 m above ground, wind velocity at 2 m above ground, relative humidity at 2 m above ground, and precipitation intensity. The data are given in tabulated form; a readme file explains the column names.
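    A minimal sketch of how the tabulated forcing data might be loaded with pandas; the file name and column names below are placeholders, the readme file in the dataset defines the actual ones:

        import pandas as pd

        # Hypothetical file and column names; see the readme for the real column labels.
        df = pd.read_csv("weather_station.csv", parse_dates=["timestamp"], index_col="timestamp")
        forcing = df[["longwave_in", "shortwave_in", "air_temp_2m",
                      "wind_2m", "rel_humidity_2m", "precip_intensity"]]
        hourly = forcing.resample("1h").mean()  # e.g. hourly means to drive a heat-transport model
        print(hourly.head())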

  9. First Steps with DaRUS

    • darus.uni-stuttgart.de
    Updated Sep 7, 2019
    Cite
    FoKUS (2019). First Steps with DaRUS [Dataset]. http://doi.org/10.18419/DARUS-444
    Dataset updated
    Sep 7, 2019
    Dataset provided by
    DaRUS
    Authors
    FoKUS
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Instructions for the first steps with DaRUS

  10. Data for: Optimized Sequences for Nonlinearity Estimation and...

    • darus.uni-stuttgart.de
    Updated Feb 19, 2025
    Cite
    Ephraim Fuchs; Thomas Handte; Daniel Verenzuela; Stephan Ten Brink (2025). Data for: Optimized Sequences for Nonlinearity Estimation and Self-Interference Cancellation [Dataset]. http://doi.org/10.18419/DARUS-3591
    Dataset updated
    Feb 19, 2025
    Dataset provided by
    DaRUS
    Authors
    Ephraim Fuchs; Thomas Handte; Daniel Verenzuela; Stephan Ten Brink
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Dataset funded by
    Sony Europe B.V.
    Description

    The dataset offers sequences that are optimized for nonlinearity estimation. The sequences are compliant with the IEEE 802.11 standards and are given as binary phase shift keying (BPSK) modulated orthogonal frequency division multiplexing (OFDM) symbols in the frequency domain. The sequences are provided for various numbers of total subcarriers and positions of occupied subcarriers that match the training fields defined in the IEEE 802.11 standards. All sequences are normalized to unit power and comply with a maximum peak-to-average power ratio (PAPR) constraint of 13.05 dB. One CSV file is given per sequence format; its columns hold the sequences optimized for nonlinearity estimation with orthonormal Laguerre polynomials, one column for each estimation order from P=3 up to P=10.
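    For orientation, a minimal sketch of how one of the CSV files might be read and a single sequence selected; the file name is a placeholder and the exact column labels are defined by the CSV header:

        import pandas as pd

        # Hypothetical file name; one CSV file exists per sequence format.
        seqs = pd.read_csv("optimized_sequences.csv")

        # Each column holds the optimized BPSK sequence for one estimation order P = 3..10.
        print(seqs.columns.tolist())
        first_order = seqs.iloc[:, 0].to_numpy()  # sequence for the lowest estimation order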

  11. Collective Robotic Construction (CRC) Research Projects organized by...

    • darus.uni-stuttgart.de
    Updated Feb 10, 2025
    Cite
    Samuel Leder; Achim Menges (2025). Collective Robotic Construction (CRC) Research Projects organized by Architectural Design Approach [Dataset]. http://doi.org/10.18419/DARUS-4731
    Dataset updated
    Feb 10, 2025
    Dataset provided by
    DaRUS
    Authors
    Samuel Leder; Achim Menges
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    This dataset contains the results of a database search for research articles related to collective robotic construction (CRC). The database search criteria can be found in the related publication: Leder, S., Menges, A.: 2023, Architectural design in collective robotic construction. Automation in Construction, Vol. 156, p. 105082. (DOI: 10.1016/j.autcon.2023.105082). The retrieved research articles are sorted by their relevance to the research topic of CRC and, where applicable, are categorized along the three dimensions discussed in the paper: (1) design description, (2) goal specification, and (3) execution. The questions that define the categorization in each dimension are as follows: Is the architectural design known before construction? How is the architectural design communicated to the robots? When is the robotic planning of the architectural design executed?

  12. Replication Code for: Rayleigh invariance allows the estimation of effective...

    • darus.uni-stuttgart.de
    Updated Apr 18, 2024
    Cite
    Leon Keim; Holger Class (2024). Replication Code for: Rayleigh invariance allows the estimation of effective CO2 fluxes due to convective dissolution into water-filled fractures [Dataset]. http://doi.org/10.18419/DARUS-4089
    Dataset updated
    Apr 18, 2024
    Dataset provided by
    DaRUS
    Authors
    Leon Keim; Holger Class
    License

    GNU General Public License v3.0, https://www.gnu.org/licenses/gpl-3.0-standalone.html

    Dataset funded by
    DFG
    Description

    This dataset consists of the software code associated with the publication titled "Rayleigh Invariance Enables Estimation of Effective CO2 Fluxes Resulting from Convective Dissolution in Water-Filled Fractures." It includes a Docker image that contains the precompiled code for immediate use. For transparency, the Dockerfile is also provided.

    1. Download the dataset: Download the compressed Docker image wrr_image.tar directly. If you want to inspect the Docker image, you can have a look at the associated Dockerfile first. The Dockerfile references a git instance which is privately hosted and not guaranteed to be hosted forever; the source code can also be inspected inside the Docker container.
    2. Load the Docker image: Load the Docker image from the provided tar file:
       docker load --input wrr_image.tar
    3. Run the Docker container: Run the Docker container with the appropriate volume mount:
       docker run -v $(pwd)/share/:/home/wrr_user/code/simulations/run/customBoussinesq/share -it wrr_image
       The image may be named slightly differently, for instance wrr_image:latest; check the terminal output of the load step. This command mounts the share directory from your current host directory into the container's /home/wrr_user/code/simulations/run/customBoussinesq/share directory, which allows you to move simulation results out of the container by placing them in the share folder.
    4. Run computations: The container comes precompiled with the necessary resources. You can either submit bash scripts to a cluster scheduler or use the Allrun scripts inside the cases. Move to the desired case and type ./Allrun 4 to run with 4 cores. Note that the computations are resource-intensive and may not work on a local machine, even with an appropriate number of cores.

  13. Data from: Design of On-body Tactile Displays to Enhance Situation Awareness...

    • darus.uni-stuttgart.de
    Updated May 2, 2022
    Cite
    Francesco Chiossi; Steeven Villa; Melanie Hauser; Robin Welsch; Lewis Chuang (2022). Design of On-body Tactile Displays to Enhance Situation Awareness in Automated Vehicles [Dataset]. http://doi.org/10.18419/DARUS-2824
    Dataset updated
    May 2, 2022
    Dataset provided by
    DaRUS
    Authors
    Francesco Chiossi; Steeven Villa; Melanie Hauser; Robin Welsch; Lewis Chuang
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    Fatalities with semi-automated vehicles typically occur when users are engaged in non-driving related tasks (NDRTs) that compromise their situational awareness (SA). This work developed a tactile display for on-body notification to support situational awareness, thus enabling users to recognize vehicle automation failures and intervene if necessary. We investigated whether such tactile notifications support "event detection" (SA-L1) or "anticipation" (SA-L3). Using a simulated automated driving scenario, a between-groups study contrasted SA-L1 and SA-L3 tactile notifications that respectively displayed the spatial positions of surrounding traffic or a future projection of the automated vehicle's position. Our participants were engaged in an NDRT, i.e., an Operation Span Task that engaged visual working memory (WM) resources. They were instructed to intervene if the tactile display contradicted the driving scenario, thus indicating vehicle sensing failures. On a single critical trial, we introduced a failure that could have resulted in a vehicle collision. SA-L1 tactile displays of potential collision targets resulted in less subjective workload on the NDRT than SA-L3 displays, which indicated the vehicle's future actions. These findings and the qualitative questionnaire responses suggest that the simplicity of the SA-L1 display required fewer mental resources, which allowed participants to better interpret sensing failures in vehicle automation. We make available data on intervention performance (distance, maximum intensity, time to collision), WM performance (attention and WM interference), and qualitative questionnaires (NASA-TLX and SART), together with the subjective questions from the semi-structured interview and the Unity VR environment.

  14. Measurements of soil temperatures and moisture content

    • darus.uni-stuttgart.de
    Updated Aug 12, 2023
    Cite
    Elisabeth Nißler; Claus Haslauer (2023). Measurements of soil temperatures and moisture content [Dataset]. http://doi.org/10.18419/DARUS-3555
    Dataset updated
    Aug 12, 2023
    Dataset provided by
    DaRUS
    Authors
    Elisabeth Nißler; Claus Haslauer
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These measurements are taken in the subsurface at the pilot site next to the weather station of the University of Stuttgart and are used to calibrate and validate our PDE-based model. The subsurface has been instrumented with 64 temperature sensors and 8 soil moisture sensors at four locations with different soil and soil-cover layers. Soil moisture is measured at 60 cm and 100 cm depth, temperature at 30, 60, 75, and 100 cm. At the drinking-water pipe location there are two sensors. The column descriptions can be found in a readme.txt file.

  15. Synchrotron X-ray video of full-penetration laser welding of aluminum...

    • darus.uni-stuttgart.de
    Updated Jun 4, 2024
    Cite
    Jonas Wagner; Christian Hagenlocher; Rudolf Weber; Marc Hummel; Felix Beckmann; Julian Moosmann; Thomas Graf (2024). Synchrotron X-ray video of full-penetration laser welding of aluminum AA1050A [Dataset]. http://doi.org/10.18419/DARUS-3860
    Dataset updated
    Jun 4, 2024
    Dataset provided by
    DaRUS
    Authors
    Jonas Wagner; Christian Hagenlocher; Rudolf Weber; Marc Hummel; Felix Beckmann; Julian Moosmann; Thomas Graf
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    The video shows a synchrotron X-ray recording of full-penetration laser welding of the aluminum alloy AA1050A (Al99.5). At the beginning of the video, the transition from partial-penetration welding with a keyhole that is closed at its bottom to full-penetration welding with a keyhole that is open at its bottom can be observed. The images show that the fluctuations of the capillary's geometry at the beginning of the process result in excessive pore formation. As the process progresses, a reliable full-penetration process with increased stability of the keyhole geometry is achieved. A detailed analysis of this transition and its implications for the absorptance is presented in the related publication.

  16. Docker Container to setup modelling environment and create results for a...

    • darus.uni-stuttgart.de
    • windlab.hlrs.de
    Updated Dec 7, 2023
    Cite
    Elisabeth Nißler; Claus Haslauer (2023). Docker Container to setup modelling environment and create results for a 1-D-model [Dataset]. http://doi.org/10.18419/DARUS-3552
    Dataset updated
    Dec 7, 2023
    Dataset provided by
    DaRUS
    Authors
    Elisabeth Nißler; Claus Haslauer
    License

    GNU General Public License v3.0, https://www.gnu.org/licenses/gpl-3.0-standalone.html

    Description

    The Dumux source code is provided in a Docker container, which compiles it and produces an executable that models heat transport and water flow from the atmosphere into the subsurface. The required boundary and initial conditions for the four locations modelled in our paper "Vadose Zone Journal Submission VZJ-2023-06-0046-OA" are provided, and the corresponding results can be calculated.

  17. SFB/Transregio 161 Data Management Plan 2023-2027

    • darus.uni-stuttgart.de
    Updated Aug 20, 2024
    + more versions
    Cite
    Christoph Müller (2024). SFB/Transregio 161 Data Management Plan 2023-2027 [Dataset]. http://doi.org/10.18419/DARUS-4452
    Dataset updated
    Aug 20, 2024
    Dataset provided by
    DaRUS
    Authors
    Christoph Müller
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    The participating universities in SFB/Transregio 161 acknowledge the general importance of research data management as a vital issue for all of their work and provide increasing central support for long-term accessibility and reusability of data, documentation of methods and tools, and privacy protection. However, technical and organisational offerings for data management can only be effective as far as researchers are aware of them and can make informed decisions on which solution is the right one for which data, which in turn requires an overall knowledge of the types and amounts of data collected or produced in the project. Furthermore, ensuring long-term availability of data for reproducibility of research results, which is one of the key ideas behind the research efforts of SFB/Transregio 161 and even more important in the final funding period of the project, requires planned action, as serving data beyond the lifetime of a project involves the allocation of technical and organisational resources. This data management plan (DMP) identifies the research data collected or produced in SFB/Transregio 161, assesses their value for the goals of reusability and reproducibility, and states the timeframes in which SFB/Transregio 161 and/or the participating universities guarantee their future availability. It furthermore describes the technical and organisational means for storing and publishing research data which the researchers of SFB/Transregio 161 can make use of. Such means might be provided by the project and funded by DFG or by the universities. The DMP states minimum requirements for metadata that must be provided for research data, briefly addresses privacy and ethics issues, and provides the researchers with guidelines for GDPR-compliant handling of personal data which might be collected in the projects. The DMP is a living document and will therefore be updated as necessary.

  18. Profile hidden Markov model for PETase homologues

    • darus.uni-stuttgart.de
    Updated Dec 1, 2021
    Cite
    Patrick C. F. Buchholz (2021). Profile hidden Markov model for PETase homologues [Dataset]. http://doi.org/10.18419/DARUS-2055
    Dataset updated
    Dec 1, 2021
    Dataset provided by
    DaRUS
    Authors
    Patrick C. F. Buchholz
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sixteen protein sequences for enzymes with known activity against polyethylene terephthalate (PET) were clustered using CD-HIT to derive a reduced set of twelve centroid sequences. These twelve protein sequences were aligned in a structure-guided multiple sequence alignment by T-COFFEE. A profile hidden Markov model (HMM) was derived from this multiple sequence alignment by HMMER.
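    For orientation, a minimal sketch of how the provided profile HMM might be applied to candidate protein sequences by calling HMMER's hmmsearch from Python; the file names are placeholders and HMMER must be installed separately:

        import subprocess

        hmm_file = "petase_homologues.hmm"   # placeholder name for the profile HMM in this dataset
        seq_db = "candidate_proteins.fasta"  # placeholder protein FASTA file to scan

        # hmmsearch writes a per-target table (hits.tbl) that is easy to parse downstream.
        subprocess.run(["hmmsearch", "--tblout", "hits.tbl", hmm_file, seq_db], check=True)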

  19. Models and Prepared Datasets for the Second Stage

    • darus.uni-stuttgart.de
    Updated Apr 18, 2024
    Cite
    Julia Pelzer (2024). Models and Prepared Datasets for the Second Stage [Dataset]. http://doi.org/10.18419/DARUS-3689
    Dataset updated
    Apr 18, 2024
    Dataset provided by
    DaRUS
    Authors
    Julia Pelzer
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    DFG
    Description

    Models trained with Heat Plume Prediction, together with the datasets that were prepared with Heat Plume Prediction (converted into a reasonable format, normalized, etc.) and used for training these models. Last relevant git commit: 5d6c5eae5b00e438. Based on the raw data from doi:darus-3651 and doi:darus-3652.

  20. Optical Microscopy and pressure measurements of Enzymatically Induced...

    • darus.uni-stuttgart.de
    Updated Feb 10, 2021
    + more versions
    Cite
    Felix Weinhardt; Holger Class; Samaneh Vahid Dastjerdi; Nikolaos Karadimitriou; Dongwon Lee; Holger Steeb (2021). Optical Microscopy and pressure measurements of Enzymatically Induced Calcite Precipitation (EICP) in a microfluidic cell [Dataset]. http://doi.org/10.18419/DARUS-818
    Dataset updated
    Feb 10, 2021
    Dataset provided by
    DaRUS
    Authors
    Felix Weinhardt; Holger Class; Samaneh Vahid Dastjerdi; Nikolaos Karadimitriou; Dongwon Lee; Holger Steeb
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
    Description

    Content: This dataset includes raw as well as processed data from three experiments (Exp 1 - 3). Each dataset consists of the readouts from the pressure sensor(s), as logged with the QmixElements software, raw images, and segmented images.

    Log files: The data in the log files are essentially the raw output of the QmixElements software, which is used to operate the syringe pumps and log the pressure sensors. They consist of the columns time, inlet pressure, outlet pressure, and the flow rates of the two syringes. Note that each pressure sensor has an offset which must be accounted for to obtain the actual pressure as accurately as possible. To derive the correct pressure drop (Δp) across the flow cell (outlet pressure - inlet pressure), a two-minute period of no flow is applied: in the absence of flow the pressure drop across the cell must be zero, so the offset of the pressure drop can be determined. For each experiment there are separate log files for stage a) the initial permeability measurement (Exp[Nr.]_PermInitial), stage b) the continuous injection of reactive solution (Exp[Nr.]_ParallelInjection), and stage c) the final permeability measurement (Exp[Nr.]_PermFinal). In Experiment 2 the final permeability measurement was not conducted because the cell was clogged, so the corresponding log file is missing. Note: when estimating the permeability reduction over time during stage c), e.g. K/K0 = Δp0/Δp(t), the initial pressure drop (Δp0) is crucial. Small variations and uncertainties affect the estimated permeability reduction tremendously. Due to the small flow rates in the experiments, the measured initial pressure drop is very small compared to the pressure range of the sensors and falls within the error limit of the sensor itself, making the measurements quite uncertain. It is therefore recommended to back-calculate the initial pressure drop based on the initial permeability measurement of stage a) and the flow rate of stage c).

    Raw images: The images taken by optical microscopy are given in the folder rawImages. They are synchronized with the pressure measurements found in the QmixElements log files, which is realized by the following naming convention for each image: Exp[Nr.]_[timestamp in sec]. The timestamp in seconds corresponds to the time in the log file. While in Experiments 1 and 2 images were captured at 0.1 frames per second (fps), in Experiment 3 the frame rate was set to 1 fps. In Experiment 3 some images were dropped during the recording. The physical resolution of the images is 3.34 μm/pixel (Experiment 1), 3.36 μm/pixel (Experiment 2) and 3.17 μm/pixel (Experiment 3), calculated as the known length of the domain divided by the number of pixels.

    Segmented images: The segmented images are saved in the folder processedImages. Not all of the raw images were processed: the first processed image has the timestamp of 420 seconds and corresponds to the time when the continuous injection of the reactant solution started, and one image every five minutes was processed. These images are also flipped vertically compared to the raw images.
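    A minimal sketch of how the raw-image timestamps might be collected and the permeability reduction evaluated in Python; the naming convention Exp[Nr.]_[timestamp in sec] and the relation K/K0 = Δp0/Δp(t) are taken from the description, while the folder layout is an assumption:

        import re
        from pathlib import Path

        # Collect the image timestamps (in seconds) of one experiment from its rawImages folder.
        timestamps = sorted(
            int(match.group(1))
            for image in Path("Exp1/rawImages").glob("Exp1_*")   # file extension not assumed
            if (match := re.search(r"_(\d+)$", image.stem))
        )

        # Permeability reduction from the measured pressure drops: K/K0 = dp0 / dp(t).
        def permeability_ratio(dp0, dp_t):
            return dp0 / dp_t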
