https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.18419/DARUS-3100
The supplemental materials of the paper titled Comparative Evaluation of Bipartite, Node-Link, and Matrix-Based Network Representations, which was accepted for presentation at the IEEE VIS 2022 conference. The structure of the folder is as follows:

.
├── code
│   ├── NetworkGeneration   # the R code to generate the network data
│   ├── NetworkVis          # the code used to generate the visual stimuli
│   ├── BPStudy             # the code to re-run the entire study
│   ├── BPStudy/stimuli     # the visual stimuli used throughout the study
│   └── StatisticalAnalysis # the code used to perform the statistical analysis
├── data
│   ├── network_data        # the network data files in JSON format
│   └── stats_data          # the raw data used for the statistical analysis
├── extra
│   ├── force-layout        # the code and data for the force-layout experiment
│   └── screen_shots.pdf    # screenshots of the study
└── README.md               # this file

Please check the README.md file within each directory for further information on how to use/run the code.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
In this repository the data files of OncoFEM are collected. These are listed below and cited where appropriate:

- SRI24 atlas T1 and T2 MRI modalities (T. Rohlfing, N. M. Zahr, E. V. Sullivan, and A. Pfefferbaum, "The SRI24 multichannel atlas of normal adult human brain structure," Human Brain Mapping, vol. 31, no. 5, pp. 798-819, 2010. doi: 10.1002/hbm.20906)
- Tutorial files: because of long calculation times, particular interim results are provided:
  - generated geometry (geometry.xdmf + geometry.h5)
  - edema mapping (edema.xdmf + edema.h5)
  - cerebrospinal fluid distribution (csf.nii.gz)
  - white matter distribution (wm.nii.gz)
  - grey matter distribution (gm.nii.gz)
  - segmented tumor distribution, class 0 (tumor_class_pve_0.nii.gz)
  - segmented tumor distribution, class 1 (tumor_class_pve_1.nii.gz)
  - segmented tumor distribution, class 2 (tumor_class_pve_2.nii.gz)
- Folder of magnetic resonance images of the first author's head in DICOM format (T1 and Flair modalities)
- First six datasets (T1, T1ce, T2, Flair, tumour segmentation) of the BraTS 2020 collection (https://www.med.upenn.edu/cbica/brats2020/data.html)
- Five different weights for tumor segmentation with different input channels; either one of T1, T1ce, T2, or Flair, or all of them, is used. Additionally, an empty image is included for adaptive training.

The data correspond to OncoFEM version 1.0. The up-to-date version of OncoFEM can be obtained from GitHub, or version 1.0 from DaRUS, which comes in a pre-installed virtual box. For usage, download and unzip the file next to the oncofem folder, or adjust the paths stored in the config.ini file if the files need to be somewhere else.
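Adjusting the paths in config.ini can be scripted, for example with Python's configparser. The section and key names below ("paths", "data_dir") are illustrative assumptions only, not OncoFEM's actual configuration layout:

```python
import configparser

# Write a minimal example config.ini; the section/key names are
# hypothetical, not OncoFEM's actual configuration layout.
with open("config.ini", "w") as f:
    f.write("[paths]\ndata_dir = ../oncofem_data\n")

config = configparser.ConfigParser()
config.read("config.ini")

# Point the data directory at the unzipped download location.
config["paths"]["data_dir"] = "/opt/oncofem_data"
with open("config.ini", "w") as f:
    config.write(f)

# Re-read to confirm the change took effect.
check = configparser.ConfigParser()
check.read("config.ini")
print(check["paths"]["data_dir"])
```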
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Research Data Management Plan (RDMP) of the priority program SPP 2170 is the formal document that helps manage the handling of data. Since enormous amounts of research data (Big Data) will be generated, the exchange of and access to the data must be ensured. Every experiment in the laboratory and every simulation generates huge amounts of unstructured data. To make these findable, accessible, interoperable, and reusable (FAIR), discipline-specific criteria must be defined in addition to the hardware and software that form the general platform. The RDMP of the DFG-funded priority program SPP 2170 therefore describes how this information could be processed in the future.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Instructions for the first steps with DaRUS
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background and Objective: Many biomedical, clinical, and industrial applications may benefit from musculoskeletal simulations. Three-dimensional macroscopic muscle models (3D models) can more accurately represent muscle architecture than their 1D (line-segment) counterparts. Nevertheless, 3D models remain underutilised in academic, clinical, and commercial environments. Among the reasons for this is a lack of modelling and simulation standardisation, verification, and validation. Here, we strive towards a solution by providing an open-access, characterised, constitutive relation for 3D musculotendon models. Methods: The musculotendon complex is modelled following the state-of-the-art active stress approach and is treated as hyperelastic, transversely isotropic, and nearly incompressible. Furthermore, force-length and -velocity relationships are incorporated, and muscle activation is derived from motor-unit information. The constitutive relation was implemented within the commercial finite-element software package Abaqus as a user subroutine. A masticatory system model with left and right masseters was used to demonstrate active and passive movement. Results: The constitutive relation was characterised by various experimental data sets and was able to capture a wide variety of passive and active behaviours. Furthermore, the masticatory simulations revealed that joint movement was sensitive to the muscle’s in-fibre passive response. Conclusions: This user material provides a “plug and play” template for 3D neuro-musculoskeletal finite-element modelling. We hope that this reduces modelling effort, fosters exchange, and contributes to the standardisation of such models. Information about the parameters can be found in readme.pdf.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset containing channel state information (CSI) alongside ground truth data (position tags, timestamps) of a massive MIMO-OFDM system measured with the DICHASUS channel sounder. Measurement parameters and machine-readable file format descriptions are provided in a JSON file (spec.json). Distributed measurement with two separate antenna arrays in an indoor lab room. Mostly line-of-sight dataset with vacuum robot-mounted transmitter.
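Since spec.json is machine-readable, it can be loaded with any JSON parser. The sketch below invents a minimal example file for illustration; the field names ("carrier_frequency", "antennas") are assumptions, not the actual DICHASUS spec.json keys:

```python
import json

# A minimal, hypothetical spec.json; the actual DICHASUS spec
# describes measurement parameters and file formats, but the keys
# used here are illustrative assumptions only.
example_spec = {"carrier_frequency": 1.272e9, "antennas": 64}
with open("spec.json", "w") as f:
    json.dump(example_spec, f)

# Load the specification back, as one would for a real dataset.
with open("spec.json") as f:
    spec = json.load(f)

print(spec["antennas"])  # → 64
```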
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Research Data Management (RDM) describes the collection, preservation, and sharing of data created or used in a research project. SimTech’s Data and Software Management team offers expertise and resources to develop and implement sustainable RDM in your SimTech project (for free). The following form serves to assess the needed support. If you have any questions about your project idea or about this form, contact us at rdm@simtech.uni-stuttgart.de.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains benchmark data generated with numerical simulations based on different PDEs, namely 1D advection, 1D Burgers', 1D and 2D diffusion-reaction, 1D diffusion-sorption, 1D, 2D, and 3D compressible Navier-Stokes, 2D Darcy flow, and the 2D shallow-water equations. This dataset is intended to advance research in scientific ML. In general, the data are stored in HDF5 format, with the array dimensions packed according to the convention [b,t,x1,...,xd,v], where b is the batch size (i.e. the number of samples), t is the time dimension, x1,...,xd are the spatial dimensions, and v is the number of channels (i.e. the number of variables of interest). More detailed information is also provided in our GitHub repository (https://github.com/pdebench/PDEBench) and in our paper submitted to the NeurIPS 2022 Benchmark track.
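The [b,t,x1,...,xd,v] packing convention can be illustrated with a synthetic array; the shapes below are made up for illustration and are not the actual PDEBench dataset dimensions:

```python
import numpy as np

# Synthetic stand-in for a 1D field stored in the [b, t, x1, v]
# convention: 8 samples, 50 time steps, 128 spatial points, and
# 1 variable of interest. The sizes are illustrative only.
data = np.zeros((8, 50, 128, 1))

# Sample 3, time step 10, full spatial profile of variable 0:
profile = data[3, 10, :, 0]
print(profile.shape)  # (128,)
```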
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description of the first steps and the most important tasks for a new DaRUS admin
https://www.gnu.org/licenses/gpl-3.0-standalone.html
This dataset consists of software code associated with the publication titled "Rayleigh Invariance Enables Estimation of Effective CO2 Fluxes Resulting from Convective Dissolution in Water-Filled Fractures." It includes a Docker image that contains the precompiled code for immediate use. For transparency, the Dockerfile is also provided.

1. Download the dataset: Download the compressed Docker image wrr_image.tar directly. If you want to inspect the Docker image, you can have a look at the associated Dockerfile first. Inside the Dockerfile you will find a reference to a git instance that is privately hosted and not guaranteed to be hosted forever. The source code can also be inspected inside the Docker container.
2. Load the Docker image: Load the Docker image from the provided tar file:
   docker load --input wrr_image.tar
3. Run the Docker container: Run the Docker container with the appropriate volume mounts:
   docker run -v $(pwd)/share/:/home/wrr_user/code/simulations/run/customBoussinesq/share -it wrr_image
   The image may be named slightly differently, for instance wrr_image:latest; check the terminal output. This command mounts the share directory from your current host directory into the container's /home/wrr_user/code/simulations/run/customBoussinesq/share directory, which allows you to move simulation results out of the container by placing them in the share folder.
4. Run computations: The container is precompiled with the necessary resources. You can either submit bash scripts to the cluster scheduler or use the Allrun scripts inside the cases. Move to the desired case and type ./Allrun 4 to run with 4 cores. Note that the computations are resource-intensive and may not work on a local machine, even with an appropriate number of cores set.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Videos showing water molecules at a sodium chloride (NaCl) solid surface for different water content. The force field for the water is TIP4P/epsilon (https://doi.org/10.1021/jp410865y), and the force field for the ions is from Loche et al. (https://doi.org/10.1021/acs.jpcb.1c05303). The trajectories have been generated using the GROMACS simulation package, and the videos have been created using VMD.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results of a series of performance measurements (frame times) to determine the impact of using the DirectStorage API for rendering time-dependent particle data sets in contrast to using traditional POSIX-style I/O APIs.
https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.18419/DARUS-4143
This dataset features both data and code related to the research article titled "Rayleigh Invariance Enables Estimation of Effective CO2 Fluxes Resulting from Convective Dissolution in Water-Filled Fractures." It includes raw data packaged in tarball format, including the Python scripts used to derive the results presented in the publication. High-resolution raw data for the contour plots, such as the VTU files, is available upon request; please feel free to reach out if needed.

1. Download the dataset: Download the dataset file using Access Dataset. Ensure you have sufficient disk space available for storing and processing the dataset.
2. Extract the dataset: Once the dataset file is downloaded, extract its contents. The dataset is compressed in tar.xz format. Use appropriate tools to extract it; for example, on Linux:
   tar -xf Publication_CCS.tar.xz
   tar -xf Publication_Karst.tar.xz
   tar -xf Validation_Sim.tar.xz
   This will create a directory containing the dataset files.
3. Install the required Python packages: Before running any code, ensure you have the necessary Python packages installed (Python 3.10 tested). The required packages and their versions are listed in the requirements.txt file. You can install them using pip:
   pip install -r requirements.txt
4. Run the post-processing script: After extracting the dataset and installing the required Python packages, run the provided post-processing script (post_process.py), which is designed to replicate all the plots of the publication from the dataset:
   python3 post_process.py
   This script will generate the plots and output them to the specified directory.
5. Explore and analyze: Once the script has completed, you can explore the generated plots to gain insights from the dataset. Feel free to modify the script or use the dataset in your own analysis and experiments.
6. Small grid study: A separate tarball holds the data generated to study the grid used in the related publication:
   tar -xf Publication_CCS.tar.xz
   If you unpack the tarball and have the requirements from above installed, you can use the Python script to generate the plots.
7. Citation: If you use this dataset in your research or publication, please cite the original source appropriately to give credit to the authors and contributors.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Fatalities with semi-automated vehicles typically occur when users are engaged in non-driving related tasks (NDRTs) that compromise their situational awareness (SA). This work developed a tactile display for on-body notifications to support situational awareness, thus enabling users to recognize vehicle automation failures and intervene if necessary. We investigated whether such tactile notifications support "event detection" (SA-L1) or "anticipation" (SA-L3). Using a simulated automated driving scenario, a between-groups study contrasted SA-L1 and SA-L3 tactile notifications, which respectively displayed the spatial positions of surrounding traffic or the future projection of the automated vehicle's position. Our participants were engaged in an NDRT, i.e., an Operation Span Task that engaged visual working memory (WM) resources. They were instructed to intervene if the tactile display contradicted the driving scenario, thus indicating vehicle sensing failures. On a single critical trial, we introduced a failure that could have resulted in a vehicle collision. SA-L1 tactile displays of potential collision targets resulted in less subjective workload on the NDRT than SA-L3 displays, which indicated the vehicle's future actions. These findings and qualitative questionnaire responses suggest that the simplicity of the SA-L1 display required fewer mental resources, which allowed participants to better interpret sensing failures in vehicle automation. We make available data on intervention performance (distance, maximum intensity, time to collision), WM performance (attention and WM interference), qualitative questionnaires (NASA-TLX and SART), together with subjective questions from the semi-structured interview and the Unity VR environment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sixteen protein sequences for enzymes with known activity against polyethylene terephthalate (PET) were clustered using CD-HIT to derive a reduced set of twelve centroid sequences. These twelve protein sequences were aligned in a structure-guided multiple sequence alignment by T-COFFEE. A profile hidden Markov model (HMM) was derived from this multiple sequence alignment by HMMER.
https://www.apache.org/licenses/LICENSE-2.0
This dataset contains supplementary code for the paper Fast Sparse Grid Operations using the Unidirectional Principle: A Generalized and Unified Framework. The code is also provided on GitHub. Here, we additionally provide the runtime measurement data generated by the code, which was used to generate the runtime plot in the paper. For more details, we refer to the file README.md.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data from a Survey at the University of Stuttgart about the relevance of different metadata fields for describing data from the engineering sciences.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The results of EICP (Enzyme-Induced Calcite Precipitation) experiments. This dataset includes pressure measurements and 3D reconstructed X-ray images from the conducted experiments. The experiments were performed on two borosilicate glass bead columns (BGC). The images were taken with a so-called "low-dose" strategy, minimizing the exposure time and the number of acquired projections in order to shorten the data acquisition time (6 min/dataset). The lower quality of the images acquired with this strategy was subsequently enhanced by an ML algorithm. The code used for this post-processing image enhancement can be found at https://doi.org/10.18419/darus-2991. The image data was recorded at time steps 1, 2, 4, 6, 8, 10, and 12 hours of the EICP experiment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We compare how well participants can determine the geographical direction of an animated map transition. In our between-subject online study, each of three groups is shown map transitions in one map projection: Mercator, azimuthal equidistant projection, or two-point equidistant projection. The distance between the start and end points is varied. Map transitions zoom out and pan towards the middle point, then zoom in and continue panning, following the recommendations by Van Wijk and Nuij (IEEE InfoVis, 2003). We measure response time and accuracy in the task. We evaluate the results by the sample means per participant, using interval estimation with 95% confidence intervals. We construct the confidence intervals by using BCa bootstrapping. The study is pre-registered on OSF.io, but due to file size limitations, we were not able to submit the video stimuli there. Instead, we provide them here. This repository contains, in the videos/ folder, the MPEG-4 video files that were shown to the participants. These are numbered from 0 to 1199 for each of the three map projections, which are also stated in the file names, for a total of 3,600 video stimuli. An additional 3×6 example stimuli are also included. For each video stimulus, a JSON file with the same prefix file name (projection + number) is located in the metadata/ folder. These files contain the ground-truth metadata for the respective stimulus. The stimuli shown for teaching the participants the task are located with the same structure under the examples/ folder. The entire source code for the study is also available in the related publication. The related repository includes: the code for generating the individual PNG frames and JSON metadata for each stimulus; the server and front-end code for the online study itself; and the Python and R code for evaluating the study results.
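As an illustration of interval estimation by bootstrapping, here is a minimal sketch of a plain percentile bootstrap for the mean. Note that the study itself used BCa bootstrapping, which additionally corrects for bias and skewness; the sample values below are invented:

```python
import random
import statistics

def percentile_bootstrap_ci(values, n_resamples=2000, alpha=0.05, seed=0):
    """Plain percentile bootstrap CI for the mean.

    Simplified sketch: the study used BCa bootstrapping, which adds
    bias and acceleration corrections on top of this idea.
    """
    rng = random.Random(seed)
    # Resample with replacement and collect the resample means.
    means = sorted(
        statistics.mean(rng.choices(values, k=len(values)))
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Invented response times (seconds) for one hypothetical participant.
sample = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4, 1.0]
lo, hi = percentile_bootstrap_ci(sample)
print(f"95% CI for the mean: [{lo:.2f}, {hi:.2f}]")
```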
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset offers sequences that are optimized for nonlinearity estimation. The sequences are compliant with IEEE 802.11 standards and are given as binary phase-shift keying (BPSK) modulated orthogonal frequency-division multiplexing (OFDM) symbols in the frequency domain. The sequences are given for various numbers of total subcarriers and positions of occupied subcarriers that match the training fields defined in the IEEE 802.11 standards. All sequences are normalized to unit power and comply with a maximum peak-to-average power ratio (PAPR) constraint of 13.05 dB. For each sequence format, one CSV file is given. The columns hold optimized sequences for nonlinearity estimation with orthonormal Laguerre polynomials for each estimation order from P=3 up to P=10.
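For reference, the PAPR of a frequency-domain OFDM symbol can be computed from an oversampled IFFT, as in the sketch below. The 64-subcarrier random BPSK symbol is a made-up example for illustration, not one of the optimized sequences from the dataset:

```python
import numpy as np

def papr_db(freq_symbol, oversample=4):
    """PAPR (dB) of the time-domain signal obtained by an
    oversampled IFFT of a frequency-domain OFDM symbol."""
    n = len(freq_symbol)
    # Zero-pad in the middle of the spectrum to oversample in time.
    padded = np.concatenate([
        freq_symbol[: n // 2],
        np.zeros((oversample - 1) * n, dtype=complex),
        freq_symbol[n // 2 :],
    ])
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Hypothetical random BPSK symbol on 64 subcarriers (values ±1);
# the actual dataset sequences satisfy a 13.05 dB PAPR constraint.
rng = np.random.default_rng(0)
symbol = rng.choice([-1.0, 1.0], size=64).astype(complex)
print(f"{papr_db(symbol):.2f} dB")
```

The all-ones symbol is a useful sanity check: its time-domain signal is a single concentrated peak, so its PAPR equals 10·log10(N) for N subcarriers.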