Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains benchmark data generated with numerical simulations of different PDEs, namely the 1D advection, 1D Burgers', 1D and 2D diffusion-reaction, 1D diffusion-sorption, 1D, 2D, and 3D compressible Navier-Stokes, 2D Darcy flow, and 2D shallow water equations. The dataset is intended to advance research in scientific machine learning. In general, the data are stored in HDF5 format, with the array dimensions packed according to the convention [b,t,x1,...,xd,v], where b is the batch size (i.e. the number of samples), t is the time dimension, x1,...,xd are the spatial dimensions, and v is the number of channels (i.e. the number of variables of interest). More detailed information is provided in our GitHub repository (https://github.com/pdebench/PDEBench) and in our paper submitted to the NeurIPS 2022 Benchmark track.
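As an illustration of this convention, a minimal Python sketch for inspecting and loading one of the HDF5 files with h5py could look as follows; the file name and dataset key are placeholders, and the actual names depend on the specific PDE (see the GitHub repository):

```python
# Minimal sketch, assuming a placeholder file name and that the first
# top-level entry is a dataset; actual names are documented in PDEBench.
import h5py

with h5py.File("1D_Advection_example.h5", "r") as f:  # hypothetical file name
    f.visit(print)                      # print every group/dataset name in the file
    first_key = list(f.keys())[0]
    data = f[first_key][...]            # load the first top-level entry into memory

# Interpreting the shape according to the convention [b, t, x1, ..., xd, v]:
# data.shape[0]    -> number of samples (batch)
# data.shape[1]    -> number of time steps
# data.shape[2:-1] -> spatial grid sizes
# data.shape[-1]   -> number of variables (channels)
print(data.shape)
```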
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Research Data Management Plan (RDMP) of the priority program SPP 2170 is the formal document that governs the handling of its research data. Since enormous amounts of research data (Big Data) will be generated, the exchange of and access to the data must be ensured. Every experiment in the laboratory and every simulation generates large amounts of unstructured data. To make these findable, accessible, interoperable, and reusable (FAIR), discipline-specific criteria must be defined in addition to the hardware and software that form the general platform. The RDMP of the DFG-funded priority program SPP 2170 therefore describes how this information is to be processed in the future.
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.1/customlicense?persistentId=doi:10.18419/DARUS-3884
Understanding the link between visual attention and users' needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap, we introduce SalChartQA - a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the tight correlation between the question and visual saliency, we propose the first computational method to predict question-driven saliency on information visualisations. Our method outperforms state-of-the-art saliency models, improving several metrics, such as the correlation coefficient and the Kullback-Leibler divergence. These results show the importance of information needs for shaping attention behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question-answering. The files of this dataset are documented in README.md.
https://spdx.org/licenses/BSD-3-Clause.html
This repository includes the data and the code of the (soon to be published) paper: "Bioinspired Morphology and Task Curricula for Learning Locomotion in Bipedal Muscle-Actuated Systems" by Nadine Badie, Firas Al-Hafez, Pierre Schumacher, Daniel F.B. Haeufle, Jan Peters, and Syn Schmitt. Please always cite the paper together with this dataset, as the dataset is not self-explanatory on its own.
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.3/customlicense?persistentId=doi:10.18419/DARUS-2826
Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. We propose a question-answering paradigm to study visualisation recallability and present VisRecall -- a novel dataset consisting of 200 information visualisations that are annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types, which are related to titles, filtering information, finding extrema, retrieving values, and understanding visualisations. It aims to make fundamental contributions towards a new generation of methods to assist designers in optimising information visualisations. This dataset contains the stimuli and collected participant data of VisRecall. The structure of the dataset is described in the README file. Further, if you are interested in the code related to the publication, you can find a copy of the code repository (see Metadata for Research Software) within this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Satellite Altimetry-based Extension of global-scale in situ river discharge Measurements (SAEM) dataset provides a comprehensive solution for addressing gaps in river discharge measurements by leveraging satellite altimetry. The dataset offers enhanced coverage for river discharge estimation by utilizing data from multiple satellite missions and integrating it with existing river gauge networks. It supports sustainable development and helps address complex water-related challenges exacerbated by climate change. The first version of SAEM includes (1) height-based discharge estimates for 8,730 river gauges, covering approximately 88% of the total gauged discharge volume globally; these estimates achieve a median Kling-Gupta Efficiency (KGE) of 0.48, surpassing the performance of current global datasets. (2) A catalog of virtual stations (VSs) defined by specific criteria, including each station's coordinates, associated satellite altimetry missions, distance to discharge gauges, and quality flags. (3) Altimetric water level time series from VSs that provide high-quality discharge estimates; the water level data are sourced from both existing Level-3 datasets and newly generated data within this study, including contributions from Hydroweb.Next, DAHITI, GRRATS, and HydroSat. (4) Non-parametric quantile mapping functions for VSs, which model the transformation of water level time series into discharge data using a Nonparametric Stochastic Quantile Mapping Function approach.
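To illustrate the basic idea behind such a quantile mapping (a water level is translated into discharge by matching empirical quantiles of the two records), a minimal sketch with synthetic data could look as follows; this is a simplified illustration of the principle, not the stochastic method used to build SAEM:

```python
# Illustrative sketch: map water levels to discharge by matching empirical
# quantiles of a water-level record to those of a gauged discharge record.
import numpy as np

def quantile_map(water_level, wl_record, q_record):
    """Map water-level values to discharge via empirical quantile matching."""
    probs = np.linspace(0.0, 1.0, 101)          # quantile levels
    wl_q = np.quantile(wl_record, probs)        # water-level quantiles
    q_q = np.quantile(q_record, probs)          # discharge quantiles
    p = np.interp(water_level, wl_q, probs)     # quantile of each observation
    return np.interp(p, probs, q_q)             # discharge at that quantile

# Example with made-up records (for illustration only):
rng = np.random.default_rng(0)
wl_record = rng.normal(10.0, 1.0, 500)          # historical water levels [m]
q_record = np.exp(rng.normal(5.0, 0.5, 500))    # historical discharge [m^3/s]
print(quantile_map(np.array([9.5, 10.5]), wl_record, q_record))
```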
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4099
Data and scripts for replicating the results and the investigation presented in the paper. This includes the DFT parameters used for generating the training data, all training and data selection scripts for the neural networks, and the scripts for running and analysing the production simulations with the trained potentials.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the pretrained baseline models, namely FNO, U-Net, and PINN. These models are trained on different PDEs, namely the 1D advection, 1D Burgers', 1D and 2D diffusion-reaction, 1D diffusion-sorption, 1D, 2D, and 3D compressible Navier-Stokes, 2D Darcy flow, and 2D shallow water equations. In addition, the dataset contains the pretrained models for the 1D inverse problem for FNO and U-Net. The models are stored using the same structure as the dataset they were trained on. All files are saved as .pt files, the default file type of the PyTorch library. More detailed information is provided in our GitHub repository (https://github.com/pdebench/PDEBench) and in our paper submitted to the NeurIPS 2022 Benchmark track.
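A minimal sketch for loading one of these files with PyTorch is given below; the file name is a placeholder, and whether a given .pt file holds a state_dict or a fully pickled model depends on how it was saved (the matching model classes are provided in the PDEBench repository):

```python
# Minimal sketch, assuming a placeholder file name. Newer PyTorch versions may
# require weights_only=False in torch.load for fully pickled model objects.
import torch

checkpoint = torch.load("pretrained_model.pt", map_location="cpu")  # hypothetical name

if isinstance(checkpoint, dict):
    # Likely a state_dict: inspect parameter names and shapes, then load it
    # into the corresponding model class via model.load_state_dict(checkpoint).
    for name, tensor in list(checkpoint.items())[:5]:
        print(name, tuple(tensor.shape))
else:
    # A fully pickled model can be put into evaluation mode and used directly.
    model = checkpoint.eval()
```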
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These measurements were taken in the subsurface at the pilot site next to the weather station of the University of Stuttgart and are used to calibrate and validate our PDE-based model. The subsurface has been instrumented with 64 temperature sensors and 8 soil moisture sensors at four locations with different soil and soil cover layers. Soil moisture is measured at 60 cm and 100 cm depth, temperature at 30, 60, 75, and 100 cm depth. At the drinking-water pipe location, there are two sensors. The column descriptions can be found in the readme.txt file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Fatalities with semi-automated vehicles typically occur when users are engaged in non-driving related tasks (NDRTs) that compromise their situational awareness (SA). This work developed a tactile display for on-body notifications to support situational awareness, thus enabling users to recognize vehicle automation failures and intervene if necessary. We investigated whether such tactile notifications support "event detection" (SA-L1) or "anticipation" (SA-L3). Using a simulated automated driving scenario, a between-groups study contrasted SA-L1 and SA-L3 tactile notifications that respectively displayed the spatial positions of surrounding traffic or the future projection of the automated vehicle's position. Our participants were engaged in an NDRT, i.e., an operation span task that engaged visual working memory (WM) resources. They were instructed to intervene if the tactile display contradicted the driving scenario, thus indicating vehicle sensing failures. On a single critical trial, we introduced a failure that could have resulted in a vehicle collision. SA-L1 tactile displays of potential collision targets resulted in less subjective workload on the NDRT than SA-L3 displays, which indicated the vehicle's future actions. These findings and the qualitative questionnaire responses suggest that the simplicity of the SA-L1 display required fewer mental resources, which allowed participants to better interpret sensing failures in vehicle automation. We make available data on intervention performance (distance, maximum intensity, time to collision), WM performance (attention and WM interference), qualitative questionnaires (NASA-TLX and SART), together with subjective questions from the semi-structured interview and the Unity VR environment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Solving the energy balance at the atmosphere-subsurface interface drives heat input into the subsurface (in the summer). We use this subsequently to calculate heat transport and water flow in the subsurface and then to calculate temperatures around drinking-water supply pipes. This data is from the weather station of the University of Stuttgart. We are providing the measured boundary conditions needed to compute the interface boundary conditions: incoming long-wave radiation, incoming short-wave radiation, air temperature at 2 m above ground, wind velocity at 2 m above ground, relative humidity at 2 m above ground, and precipitation intensity. The data are given in tabulated form; a readme file explains the column names.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
General
A hardware prototype of a four-bar linkage was constructed to generate the presented data set. The data consist of desired input currents supplied to a servo motor and the measured resulting velocities. The mechanism is portrayed in the lab_mechanism_x.jpg images. Further details of the mechanism can be found in the section "Mechanism Setup". For each input trajectory in the input/ folder, the experiment was performed three times. The corresponding measurement files are in the output_empty/ and output_honey/ folders, identified by the extension output_xx, where xx is either 00, 01, or 02. For the measurements in the output_honey/ folder, a non-symmetrical stirrer was mounted on the mechanism and moved through regular supermarket forest honey, introducing additional viscous damping into the system. This also makes it possible to supply higher currents to the motor for relevant amounts of time, because the maximal motor velocity is not reached as soon. For the files in the output_empty/ folder, no stirrer was mounted on the mechanism.

File Setup
The input and output files are comma-separated text files. In the input files, the first line contains a column description (% Time [s], Prescribed Current [mA]) and the following lines indicate the input commands. In the input-command lines, the first value is a time marker in seconds, and the second value is the desired current that should be supplied to the motor from that time on until the time marker in the next line. The output files have a column description in the first line (% Time [s], Goal Current [mA], Present Current [mA], Present Voltage [V], Present Position [rad], Present Velocity [rad/s]) and the following lines are the measurements from the servo motor. It is important to note that the servo motor only has a granularity of 2.69 mA steps for the supplied currents. Hence, the goal current will be the closest multiple of 2.69 below the desired current in the input file. The present position is given in rad, where the null position is with the left link in a horizontal position (parallel to the ground link) pointing to the right. The motor then actuates this link in counter-clockwise direction when viewed from the top.

Mechanism Setup
The four-bar linkage consists of aluminum blocks connected by revolute joints. The joints of the three moving links are 10 mm apart from the edges of the aluminum blocks. The lengths of the moving links are the following (with joint distances denoted in brackets):
left link / crank link: 50 mm (30 mm)
top link / coupler link: 124 mm (104 mm)
right link / rocker link: 80 mm (60 mm)
The ground link can be freely adjusted between 45 mm and 120 mm, but was fixed to 95 mm in the conducted experiments. A stirrer can be mounted on the mechanism and moved through a liquid, introducing viscous damping into the system. A Dynamixel XH430-W350-R servo motor actuates the left link. The servo motor has a built-in controller and can be supplied with a desired current signal to enforce a moment on the left link. The motor is controlled via a C++ program running under Ubuntu 20.04. The baud rate is set to the highest admissible value of 4.5 Mb/s and the USB latency is set to 1 ms. An accelerometer (Bosch BMA456) is mounted on the top of the mechanism but was not used in the experiments.

Python Notebook tutorial.ipynb
This Python 3 notebook visualizes the trajectories to give an intuition about the presented data set. It exemplifies how to load and extract values from the input and output files. Afterwards, it plots the input trajectories together with the corresponding velocity measurements.
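As a quick illustration of the file format described above, the following sketch loads one input file and one output file and plots the commanded current against the measured velocity; the file names are placeholders, and the full loading code is provided in tutorial.ipynb:

```python
# Minimal sketch, assuming the comma-separated layout described above
# (a header line starting with '%' followed by numeric rows).
import numpy as np
import matplotlib.pyplot as plt

def load_file(path):
    """Return the column names and a 2D array with the numeric values."""
    with open(path) as f:
        header = [h.strip() for h in f.readline().lstrip("% ").split(",")]
    values = np.loadtxt(path, delimiter=",", skiprows=1)
    return header, values

in_cols, in_data = load_file("input/trajectory_00.txt")                     # hypothetical name
out_cols, out_data = load_file("output_empty/trajectory_00_output_00.txt")  # hypothetical name

# Plot the commanded current (piecewise constant) and the measured velocity.
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.step(in_data[:, 0], in_data[:, 1], where="post")
ax1.set_ylabel("Prescribed Current [mA]")
ax2.plot(out_data[:, 0], out_data[:, 5])
ax2.set_ylabel("Present Velocity [rad/s]")
ax2.set_xlabel("Time [s]")
plt.show()
```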
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the object model data sets of the case study specimens from the related publication: Gil Pérez, M., Zechmeister, C., Kannenberg, F., Mindermann, P., Balangé, L., Guo, Y., Hügle, S., Gienger, A., Forster, D., Bischoff, M., Tarín, C., Middendorf, P., Schwieger, V., Gresser, G. T., Menges, A., Knippers, J.: 2022, Computational co-design framework for coreless wound fibre-polymer composite structures. Journal of Computational Design and Engineering, 9(2), pp. 310-329. (doi: 10.1093/jcde/qwab081) Abstract: In coreless filament winding, resin-impregnated fibre filaments are wound around anchor points without an additional mould. The final geometry of the produced part results from the interaction of fibres in space and is initially undetermined. Therefore, the success of large-scale coreless wound fibre composite structures for architectural applications relies on the reciprocal collaboration of simulation, fabrication, quality evaluation, and data integration domains. The correlation of data from those domains enables the optimization of the design towards ideal performance and material efficiency. This paper elaborates on a computational co-design framework to enable new modes of collaboration for coreless wound fibre–polymer composite structures. It introduces the use of a shared object model acting as a central data repository that facilitates interdisciplinary data exchange and the investigation of correlations between domains. The application of the developed computational co-design framework is demonstrated in a case study in which the data are successfully mapped, linked, and analysed across the different fields of expertise. The results showcase the framework’s potential to gain a deeper understanding of large-scale coreless wound filament structures and their fabrication and geometrical implications for design optimization. The data set contains data from three sets of ten coreless filament wound fiber-polymer composite testing specimens that were digitally simulated, robotically manufactured, laser scanned and mechanically tested. The data contains contributions from different domains: simulation, fabrication, quality evaluation, and data integration. The data was stored via an open-source object model (BHoM) that was extended for its use in coreless filament winding. "The BHoM (Buildings and Habitats object Model) is a collaborative computational development project for the built environment. BHoM aims to standardise the data and functionality that AEC domain experts use to design across all disciplines." (Source: https://bhom.xyz/documentation/) The files are named by specimen type and ID (SX-Y.json) - X refers to the specimen type and Y refers to the specimen ID.
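Since the specimen files are serialised BHoM objects in JSON, a first look at their structure can be taken with a short, generic script; the file name below is a placeholder and the script makes no assumptions about the object model beyond standard JSON nesting:

```python
# Minimal sketch: walk a specimen JSON file and report keys and value types.
import json

with open("S1-1.json") as f:   # hypothetical specimen type/ID
    specimen = json.load(f)

def summarise(node, prefix="", depth=0, max_depth=2):
    """Print the keys of nested dictionaries up to a small depth."""
    if depth > max_depth:
        return
    if isinstance(node, dict):
        for key, value in node.items():
            print(f"{prefix}{key}: {type(value).__name__}")
            summarise(value, prefix + "  ", depth + 1, max_depth)
    elif isinstance(node, list) and node:
        print(f"{prefix}[{len(node)} items]")
        summarise(node[0], prefix + "  ", depth + 1, max_depth)

summarise(specimen)
```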
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Replication data for an analysis of research articles from computational mechanics, all using the software Pasimodo (https://www.itm.uni-stuttgart.de/en/software/pasimodo/). Categories for analysing the articles: To understand the context of each citation, we classified the citations into referring and reuse. Referring means that a citation is used in a document to refer to another article, but the content of that article does not contribute to or support the work at hand. Reuse refers to the use of data, concepts, or theories previously established by others to enhance, sustain, or build upon in one's own work. It can also mean that the referenced article is used to set boundaries or to differentiate the current work from that of the cited article. To gain a more profound understanding of these purposes of reusing other research, we defined a further subdivision of the term reuse: adjustment, application, assumption, and comparison. Adjustment refers to an existing concept combined with something new to refer to a broader concept or idea, while application and assumption focus more on details. Application applies when the reference was used for an existing concept that was incorporated into the work itself in the form of formulas, models, theories, or parameters. Assumption, in contrast, refers to an existing concept that was incorporated into the model without formulas. Comparison is used when an existing concept is referenced to compare or validate results, methods, or assumptions. Overview_Articles_Research_Cases.csv contains the metadata about the analysed articles and a key to identify them. Evaluation_of_Articles_Research_Cases.csv contains further information about the scope of every article. Reuse_.csv contains the detailed analysis of the articles (see method). Comparison_of_Articles_Research_Cases.csv includes the data analysis across all Reuse_.csv files. Reuse_in_Articles_Research_Cases.xlsx contains all files above.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The benchmark dataset was generated through a comprehensive simulation study of the deep drawing process for DP600 sheet metal, incorporating variations in geometry, material properties, and process parameters. The simulations were based on the deep drawing of modified quadratic cups with a length of 210 mm and a drawing depth of 30 mm. Three distinct base geometries - Concave, Convex, and Rectangular - were derived from a rectangular reference shape, with key geometric parameters varied in two increments (minimum and maximum). For each geometry, material and process parameters such as the hardening factor (MAT), friction coefficient (FC), sheet thickness (SHTK), and binder force (BF) were systematically varied, resulting in 32,076 unique simulations. Each simulation included stress, strain, thickness distribution, and nodal displacement data for the deep drawing and subsequent springback analysis. The simulation data were stored in HDF5 format, with metadata linking each dataset to its corresponding geometry, material, and process parameters. This structured format ensures efficient retrieval and processing of simulation results, facilitating further analysis and benchmarking.
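A hedged sketch for browsing such an HDF5 file with h5py is given below; the file name is a placeholder, and the exact group layout and attribute names (geometry, MAT, FC, SHTK, BF) are assumptions to be checked against the actual files:

```python
# Minimal sketch: print every group/dataset in the file together with its
# shape or attributes, to discover where results and metadata are stored.
import h5py

with h5py.File("deep_drawing_simulations.h5", "r") as f:  # hypothetical file name
    def describe(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
        else:
            # Metadata such as geometry, MAT, FC, SHTK, and BF is expected
            # as attributes attached to the groups (assumption).
            print(name, dict(obj.attrs))
    f.visititems(describe)
```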
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-3138
This dataset contains the stimuli and collected participant data of VisRecall++. The structure of the dataset is described in the README file. Further, if you are interested in the code related to the publication, you can find a copy of the code repository (see Metadata for Research Software) within this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We investigated the effect of stimulus-question ordering and of the modality in which the question is presented in a user study with visual question answering (VQA) tasks. In an eye-tracking user study (N=13), we tested five conditions within subjects. The conditions were counter-balanced to account for order effects. We collected participants' answers to the VQA tasks and their responses to the NASA TLX questionnaire after each completed condition; gaze data was recorded only during exposure to the image stimulus. We provide the data and scripts used for statistical analysis, the files used for the exploratory analysis in WebVETA, the image stimuli used per condition and training, as well as the VQA tasks related to the images. The images and questions used in the user study are a subset of the GQA dataset (Hudson and Manning, 2019). For more information see: https://cs.stanford.edu/people/dorarad/gqa/index.html The mean fixation duration, hit-any-AOI rate, and scan paths were generated using Gazealytics (https://www2.visus.uni-stuttgart.de/gazealytics/). The hit-any-AOI rate and mean fixation duration were calculated per person per image stimulus.
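For reference, the two reported metrics can be illustrated with a small, self-contained sketch, assuming fixations are given as (x, y, duration) tuples and AOIs as axis-aligned rectangles; this mirrors the definitions stated above, not the Gazealytics implementation:

```python
# Illustrative sketch of the two gaze metrics, per participant and stimulus.
def mean_fixation_duration(fixations):
    """Average duration over all fixations for one participant and stimulus."""
    durations = [d for _, _, d in fixations]
    return sum(durations) / len(durations) if durations else 0.0

def hit_any_aoi_rate(fixations, aois):
    """Share of fixations that land inside at least one AOI rectangle."""
    def inside(x, y, box):
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1
    hits = sum(1 for x, y, _ in fixations if any(inside(x, y, b) for b in aois))
    return hits / len(fixations) if fixations else 0.0

# Example with made-up fixations and one AOI covering part of the chart:
fixations = [(120, 80, 0.21), (300, 210, 0.35), (310, 205, 0.18)]
aois = [(250, 150, 400, 300)]
print(mean_fixation_duration(fixations), hit_any_aoi_rate(fixations, aois))
```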
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Videos showing water molecules at a sodium chloride (NaCl) solid surface for different water contents. The force field for the water is TIP4P/epsilon (https://doi.org/10.1021/jp410865y), and the force field for the ions is from Loche et al. (https://doi.org/10.1021/acs.jpcb.1c05303). The trajectories have been generated using the GROMACS simulation package, and the videos have been created using VMD.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data set containing the user study data gathered for the paper "ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory" (CHI'23). Paper Abstract: Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user's visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a 'sweet spot' for extending smartphones with augmented reality, informing the design of hybrid user interfaces.
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.18419/DARUS-4143
This dataset features both data and code related to the research article titled "Rayleigh Invariance Enables Estimation of Effective CO2 Fluxes Resulting from Convective Dissolution in Water-Filled Fractures". It includes raw data packaged in tarball format, including the Python scripts used to derive the results presented in the publication. High-resolution raw data for the contour plots is available upon request.
1. Download the dataset: Download the dataset file using Access Dataset. Ensure you have sufficient disk space available for storing and processing the dataset.
2. Extract the dataset: Once the dataset file is downloaded, extract its contents. The dataset is compressed in tar.xz format; use appropriate tools to extract it. For example, on Linux, you can use the following commands: tar -xf Publication_CCS.tar.xz, tar -xf Publication_Karst.tar.xz, tar -xf Validation_Sim.tar.xz. This will create directories containing the dataset files.
3. Install the required Python packages: Before running any code, ensure you have the necessary Python packages installed (Python 3.10 tested). The required packages and their versions are listed in the requirements.txt file. You can install them using pip: pip install -r requirements.txt
4. Run the post-processing script: After extracting the dataset and installing the required Python packages, you can run the provided post-processing script (post_process.py), which is designed to replicate all the plots from the publication based on the dataset. Execute the script using Python: python3 post_process.py. The script will generate the plots and output them to the specified directory.
5. Explore and analyse: Once the script has finished running, you can explore the generated plots to gain insights from the dataset. Feel free to modify the script or use the dataset in your own analysis and experiments. High-resolution data, such as the vtu files for the contour plots, is available upon request; please feel free to reach out if needed.
6. Small grid study: There is a tarball with the data that was generated to study the grid used in the related publication: tar -xf Publication_CCS.tar.xz. If you unpack the tarball and have the requirements from above installed, you can use the Python script to generate the plots.
7. Citation: If you use this dataset in your research or publication, please cite the original source appropriately to give credit to the authors and contributors.