License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Instructions for the first steps with DaRUS
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This dataset contains benchmark data generated with numerical simulations of different PDEs, namely the 1D advection, 1D Burgers', 1D and 2D diffusion-reaction, 1D diffusion-sorption, 1D, 2D, and 3D compressible Navier-Stokes, 2D Darcy flow, and 2D shallow-water equations. The dataset is intended to advance research in scientific machine learning. In general, the data are stored in HDF5 format, with the array dimensions packed according to the convention [b, t, x1, ..., xd, v], where b is the batch size (i.e. the number of samples), t is the time dimension, x1, ..., xd are the spatial dimensions, and v is the number of channels (i.e. the number of variables of interest). More detailed information is provided in our GitHub repository (https://github.com/pdebench/PDEBench) and in our paper submitted to the NeurIPS 2022 Benchmark track.
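As a quick orientation, here is a minimal sketch of how such a file could be inspected with h5py; the file name and the key "data" are placeholders, not actual names from the dataset:

```python
import h5py  # assumption: the HDF5 files are readable with h5py

# Placeholder file name; substitute one of the downloaded benchmark files.
with h5py.File("example_pde_data.h5", "r") as f:
    for name, obj in f.items():
        # Datasets are expected to follow the layout [b, t, x1, ..., xd, v].
        print(name, getattr(obj, "shape", "(group)"))

    # Hypothetical key "data": take the first sample and the first channel,
    # leaving an array of shape [t, x1, ..., xd].
    if "data" in f:
        sample = f["data"][0, ..., 0]
        print(sample.shape)
```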
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
## Overview
Darius is a dataset for instance segmentation tasks - it contains Tail Up annotations for 1,825 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
This dataset contains in-depth statistics for League of Legends matchups involving Darius. The data result from the analysis of hundreds of thousands to millions of games, processed through a data pipeline fed directly from the Riot API. The page features interactive UI elements, including filter buttons for roles (Top, Mid, Jungle, Bot, Support) and a search bar to filter matchups by opponent champion. The data provide insights that are not available on competitor websites, including matchup-specific tips, skill order, rune setups, summoner spells, item builds, and detailed in-game statistics such as gold differences, damage metrics, KDA ratios, gank frequencies, cooldown comparisons, and experience point differences. Whether you're looking to improve your gameplay or gain a competitive edge, this dataset is a valuable resource for mastering Darius matchups.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
The Satellite Altimetry-based Extension of global-scale in situ river discharge Measurements (SAEM) dataset provides a comprehensive solution for addressing gaps in river discharge measurements by leveraging satellite altimetry. This dataset offers enhanced coverage for river discharge estimation by utilizing data from multiple satellite missions and integrating them with existing river gauge networks. It supports sustainable development and helps address complex water-related challenges exacerbated by climate change. The first version of SAEM includes:
1. Height-based discharge estimates for 8,730 river gauges, covering approximately 88% of the total gauged discharge volume globally. These estimates achieve a median Kling-Gupta Efficiency (KGE) of 0.48, surpassing the performance of current global datasets (a sketch of the KGE is given after this list).
2. A catalog of virtual stations (VSs), defined by specific criteria and including each station's coordinates, associated satellite altimetry missions, distance to discharge gauges, and quality flags.
3. Altimetric water level time series from VSs that provide high-quality discharge estimates. The water level data are sourced from both existing Level-3 datasets and data newly generated within this study, including contributions from Hydroweb.Next, DAHITI, GRRATS, and HydroSat.
4. Non-parametric quantile mapping functions for VSs, which model the transformation of water level time series into discharge data using a nonparametric stochastic quantile mapping function approach.
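For reference, the Kling-Gupta Efficiency quoted above combines correlation, variability ratio, and bias ratio. A minimal sketch of its standard form (Gupta et al., 2009), assuming simulated and observed discharge as NumPy arrays; this is not the authors' implementation:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 indicates a perfect match with observations."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = sim.std() / obs.std()          # variability ratio
    beta = sim.mean() / obs.mean()         # bias ratio
    return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)
```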
License: see https://okredo.com/en-lt/general-rules
UAB "Darus" financial data: profit, annual turnover, paid taxes, sales revenue, equity, assets (long-term and short-term), profitability indicators.
License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/)
The dataset offers sequences that are optimized for nonlinearity estimation. The sequences are compliant with the IEEE 802.11 standards and are given as binary phase shift keying (BPSK) modulated orthogonal frequency division multiplexing (OFDM) symbols in the frequency domain. The sequences are provided for various numbers of total subcarriers and positions of occupied subcarriers that match the training fields defined in the IEEE 802.11 standards. All sequences are normalized to unit power and comply with a maximum peak-to-average power ratio (PAPR) constraint of 13.05 dB. For each sequence format, one CSV file is given. The columns hold sequences optimized for nonlinearity estimation with orthonormal Laguerre polynomials for each estimation order from P=3 up to P=10.
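As an illustration of how the published sequences might be used, the following sketch loads one column from a CSV file (the file name and column layout are assumptions), transforms it to the time domain, and checks its PAPR:

```python
import numpy as np

# Hypothetical file name and layout: one column per estimation order P = 3..10,
# each holding a frequency-domain BPSK sequence (real-valued +/-1 entries assumed).
freq_seqs = np.loadtxt("sequences_example.csv", delimiter=",")

x_freq = freq_seqs[:, 0]                               # sequence for the lowest order
x_time = np.fft.ifft(x_freq) * np.sqrt(len(x_freq))    # unitary IFFT keeps unit power

papr_db = 10 * np.log10(np.max(np.abs(x_time)**2) / np.mean(np.abs(x_time)**2))
# Should respect the 13.05 dB constraint; oversampling, which the authors may have
# used for the PAPR evaluation, is omitted here for brevity.
print(f"PAPR: {papr_db:.2f} dB")
```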
This dataset provides information about the number of properties, residents, and average property values for Darius Pearce Road cross streets in Youngsville, NC.
License: MIT (https://spdx.org/licenses/MIT.html)
BayesValidRox is an open-source Python package that provides methods for surrogate modeling, Bayesian inference, and model comparison. Release 2.0.0 (2025-02-05) improves the modularity and uniformity of the package and introduces template classes for surrogate models as well as methods for generating posterior samples.
License: custom license (https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.18419/DARUS-4143)
This dataset features both data and code related to the research article "Rayleigh Invariance Enables Estimation of Effective CO2 Fluxes Resulting from Convective Dissolution in Water-Filled Fractures". It includes raw data packaged as tarballs, together with the Python scripts used to derive the results presented in the publication. High-resolution raw data for the contour plots is available upon request.

1. Download the dataset: download the dataset file using Access Dataset. Ensure you have sufficient disk space available for storing and processing the dataset.
2. Extract the dataset: the dataset is compressed in tar.xz format; use appropriate tools to extract it. For example, on Linux: tar -xf Publication_CCS.tar.xz, tar -xf Publication_Karst.tar.xz, tar -xf Validation_Sim.tar.xz. This creates directories containing the dataset files. (A Python alternative using the tarfile module is sketched after this list.)
3. Install the required Python packages: before running any code, ensure the necessary Python packages are installed (tested with Python 3.10). The required packages and their versions are listed in requirements.txt and can be installed with pip: pip install -r requirements.txt
4. Run the post-processing script: after extracting the dataset and installing the required packages, run the provided post-processing script (post_process.py), which is designed to replicate all plots of the publication from the dataset: python3 post_process.py. The script generates the plots and writes them to the specified directory.
5. Explore and analyze: once the script has finished, explore the generated plots to gain insights from the dataset. Feel free to modify the script or use the dataset in your own analysis and experiments. High-resolution data, such as the .vtu files for the contour plots, is available upon request.
6. Small grid study: a tarball contains the data generated to study the grid used in the related publication: tar -xf Publication_CCS.tar.xz. After unpacking the tarball and installing the requirements from above, the Python script can be used to generate the plots.
7. Citation: if you use this dataset in your research or a publication, please cite the original source appropriately to give credit to the authors and contributors.
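If a shell tar is not available, the archives can also be unpacked from Python with the standard-library tarfile module. A minimal sketch, using the archive names from step 2:

```python
import tarfile

# Extract the archives listed in step 2 into the current directory.
for archive in ["Publication_CCS.tar.xz", "Publication_Karst.tar.xz", "Validation_Sim.tar.xz"]:
    with tarfile.open(archive, mode="r:xz") as tar:
        tar.extractall()
```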
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
The benchmark dataset was generated through a comprehensive simulation study of the deep drawing process for DP600 sheet metal, incorporating variations in geometry, material properties, and process parameters. The simulations were based on the deep drawing of modified quadratic cups with a length of 210 mm and a drawing depth of 30 mm. Three distinct base geometries - Concave, Convex, and Rectangular - were derived from a rectangular reference shape, with key geometric parameters varied in two increments (minimum and maximum). For each geometry, material and process parameters such as the hardening factor (MAT), friction coefficient (FC), sheet thickness (SHTK), and binder force (BF) were systematically varied, resulting in 32,076 unique simulations. Each simulation included stress, strain, thickness distribution, and nodal displacement data for the deep drawing and subsequent springback analysis. The simulation data were stored in HDF5 format, with metadata linking each dataset to its corresponding geometry, material, and process parameters. This structured format ensures efficient retrieval and processing of simulation results, facilitating further analysis and benchmarking.
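A minimal sketch of how the HDF5 metadata could be browsed with h5py; the file name is a placeholder, and the attribute names are only expected from the description above, not guaranteed to match the actual structure:

```python
import h5py

# Placeholder file name; metadata such as geometry, MAT, FC, SHTK, and BF is
# expected to be stored as HDF5 attributes on the groups or datasets.
with h5py.File("deep_drawing_simulations.h5", "r") as f:
    def show(name, obj):
        if len(obj.attrs) > 0:
            print(name, dict(obj.attrs))
    f.visititems(show)
```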
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
The dataset covers the gigamapping of the design and execution of the game GoCOLife, which introduces a more-than-human perspective to its players. The data were produced as part of the studio course 'COLife: More-than-Human Perspective to CoDesign' in the summer semester of 2024.
License: custom license (https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/2.1/customlicense?persistentId=doi:10.18419/DARUS-3884)
Understanding the link between visual attention and users' needs when visually exploring information visualisations is under-explored, due to a lack of large and diverse datasets to facilitate such analyses. To fill this gap, we introduce SalChartQA, a novel crowd-sourced dataset that uses the BubbleView interface as a proxy for human gaze and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the tight correlation between the question and visual saliency, we propose the first computational method to predict question-driven saliency on information visualisations. Our method outperforms state-of-the-art saliency models, improving several metrics such as the correlation coefficient and the Kullback-Leibler divergence. These results show the importance of information needs in shaping attention behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question answering. The files of this dataset are documented in README.md.
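For context, the two reported metrics are commonly computed for saliency maps as sketched below; this is a generic formulation, not the evaluation code shipped with SalChartQA:

```python
import numpy as np

def saliency_metrics(pred, gt, eps=1e-12):
    """Pearson correlation coefficient and KL divergence between a predicted
    and a ground-truth saliency map (non-negative 2D arrays)."""
    # Correlation coefficient between the standardized maps.
    p = (pred - pred.mean()) / (pred.std() + eps)
    g = (gt - gt.mean()) / (gt.std() + eps)
    cc = float((p * g).mean())

    # KL divergence, treating the maps as probability distributions over pixels.
    p_dist = pred / (pred.sum() + eps)
    g_dist = gt / (gt.sum() + eps)
    kld = float(np.sum(g_dist * np.log(g_dist / (p_dist + eps) + eps)))
    return cc, kld
```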
License: custom license (https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4059)
The Roaming Autonomous Distributed robot (RADr) is a two-wheeled mobile robot that can assemble hexagonal digital materials. This dataset contains the 3D models of a RADr and of a digital material that the RADr can actively grab with an electromagnet and move around (in 3dm and STEP file formats). On top of each, there are retroreflective markers so that they can be tracked by a motion capture system such as OptiTrack or Vicon. The dataset also contains the robot code that is uploaded to the robot to enable its wireless control via MQTT, a standard messaging protocol for the Internet of Things (IoT); a minimal usage sketch follows the component list below. The electronic components on the robot are:
- Arduino Uno Wi-Fi Rev2
- 2x ROBOTIS DYNAMIXEL XL430-W250-T with wheel set for TurtleBot3
- ROBOTIS DYNAMIXEL motor shield
- 11.1 V 2250 mAh LiPo battery
- 8 kg electromagnet
- MOSFET
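As an illustration of the MQTT-based control path from an operator PC, assuming the paho-mqtt 1.x client API; the broker address, topic names, and payloads below are hypothetical, and the actual ones are defined in the robot code:

```python
import paho.mqtt.client as mqtt  # assumption: paho-mqtt 1.x API on the operator side

# Hypothetical broker address and topics; see the robot code for the real ones.
client = mqtt.Client()
client.connect("192.168.0.10", 1883)

client.publish("radr/1/drive", "forward")   # illustrative drive command
client.publish("radr/1/magnet", "on")       # illustrative electromagnet toggle
client.disconnect()
```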
License: custom license (https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-3796)
This code/data allows you to reproduce the results of the paper "A physiologically enhanced muscle spindle model: using a Hill-type model for extrafusal fibers as template for intrafusal fibers" by P. F. S. Chacon, M. Hammer, I. Wochner, J. R. Walter and S. Schmitt. Always cite the paper together with this dataset, because this dataset is not self-explanatory. More information about the dataset can be found in README.md.
License: custom license (https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4099)
Data and scripts for replicating the results and the investigation presented in the paper. This includes the DFT parameters for generating the training data, all training and data-selection scripts for the neural networks, and scripts for running and analysing the production simulations with the trained potentials.
License: CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
This repository collects the data files of OncoFEM. They are listed below and cited where appropriate:
- SRI24 atlas T1 and T2 MRI modalities (T. Rohlfing, N. M. Zahr, E. V. Sullivan, and A. Pfefferbaum, "The SRI24 multichannel atlas of normal adult human brain structure," Human Brain Mapping, vol. 31, no. 5, pp. 798-819, 2010. doi: 10.1002/hbm.20906)
- Tutorial files; because of long calculation times, particular interim results are provided:
  - generated geometry (geometry.xdmf + geometry.h5)
  - edema mapping (edema.xdmf + edema.h5)
  - cerebrospinal fluid distribution (csf.nii.gz)
  - white matter distribution (wm.nii.gz)
  - grey matter distribution (gm.nii.gz)
  - segmented tumor distribution, class 0 (tumor_class_pve_0.nii.gz)
  - segmented tumor distribution, class 1 (tumor_class_pve_1.nii.gz)
  - segmented tumor distribution, class 2 (tumor_class_pve_2.nii.gz)
- Folder of magnetic resonance images of the first author's head in DICOM format (T1 and FLAIR modalities)
- First six datasets (T1, T1ce, T2, FLAIR, tumour segmentation) of the BraTS 2020 collection (https://www.med.upenn.edu/cbica/brats2020/data.html)
- Five different weights for tumor segmentation with different input channels; either one of T1, T1ce, T2, FLAIR, or all of them are used. Additionally, an empty image is included for adaptive training.

The data correspond to OncoFEM version 1.0. The up-to-date version of OncoFEM can be obtained from GitHub, and version 1.0 from DaRUS, which comes in a pre-installed virtual box. For usage, download and unzip the file next to the oncofem folder, or adjust the paths stored in the config.ini file if the files need to be located elsewhere.
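A minimal sketch of inspecting and adjusting the paths in config.ini with Python's standard configparser module; the section and key names below are hypothetical, so check the file shipped with OncoFEM for the real entries:

```python
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

# Print all configured entries to see which paths are currently set.
for section in config.sections():
    for key, value in config[section].items():
        print(f"[{section}] {key} = {value}")

# Example of pointing an entry to a different data location (hypothetical key):
# config["paths"]["data_dir"] = "/path/to/unzipped/files"
# with open("config.ini", "w") as f:
#     config.write(f)
```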
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Models trained with Heat Plume Prediction, and datasets prepared with Heat Plume Prediction into the format used to train these models (reduced set of inputs/outputs, 2D, with the normalization applied during training). Last relevant git commit: cae87d68faf96b2bd8dab935. Based on raw data from doi:darus-4156.
License: BSD 3-Clause (https://spdx.org/licenses/BSD-3-Clause.html)
This repository includes the data and the code of the (soon to be published) paper: "Bioinspired Morphology and Task Curricula for Learning Locomotion in Bipedal Muscle-Actuated Systems" by Nadine Badie, Firas Al-Hafez, Pierre Schumacher, Daniel F.B. Haeufle, Jan Peters, and Syn Schmitt. Please always cite the paper in combination with this dataset as it is not self-explanatory.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This repository contains supplemental data for the article "Spectral Normalization and Voigt-Reuss net: A universal approach to microstructure-property forecasting with physical guarantees", accepted for publication in GAMM-Mitteilungen, by Sanath Keshav, Julius Herb, and Felix Fritzen [1]. The data in this DaRUS repository act as an extension to the GitHub repository of the so-called Voigt-Reuss net. The data were generated by solving thermal homogenization problems for an abundance of different microstructures. The microstructures are defined by periodic representative volume elements (RVEs), and periodic boundary conditions are applied to the temperature fluctuations. We consider bi-phasic two-dimensional microstructures with a resolution of 400 × 400 pixels, as published in [2], and three-dimensional microstructures with a resolution of 192 × 192 × 192 voxels, as published in [3]. For both microstructure datasets, we provide the effective thermal conductivity tensor obtained by solving homogenization problems on the full microstructure for different material parameters in the two phases. For the simulations, we used our implementation of Fourier-Accelerated Nodal Solvers (FANS, [4]), which is based on a Finite Element Method (FEM) discretization. Further details are provided in the README.md file of this dataset, in our manuscript [1], and in the GitHub repository.

[1] Keshav, S., Herb, J., and Fritzen, F. (2025). Spectral Normalization and Voigt-Reuss net: A universal approach to microstructure-property forecasting with physical guarantees. GAMM-Mitteilungen, e70005. https://doi.org/10.1002/gamm.70005
[2] Lißner, J. (2020). 2d microstructure data (Version V2) [dataset]. DaRUS. https://doi.org/10.18419/DARUS-1151
[3] Prifling, B., Röding, M., Townsend, P., Neumann, M., and Schmidt, V. (2020). Large-scale statistical learning for mass transport prediction in porous materials using 90,000 artificially generated microstructures [dataset]. Zenodo. https://doi.org/10.5281/zenodo.4047774
[4] Leuschner, M., and Fritzen, F. (2018). Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems. Computational Mechanics, 62(3), 359-392. https://doi.org/10.1007/s00466-017-1501-5
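For orientation on the bounds referenced in the network's name, here is a minimal, generic sketch of the classical Voigt and Reuss bounds on the effective conductivity of a two-phase microstructure with scalar phase conductivities; this is textbook material and not taken from the authors' code:

```python
import numpy as np

def voigt_reuss_bounds(f1, k1, k2):
    """Upper (Voigt) and lower (Reuss) bounds on the effective conductivity
    of a two-phase microstructure with volume fraction f1 of phase 1."""
    f2 = 1.0 - f1
    k_voigt = f1 * k1 + f2 * k2              # arithmetic (Voigt) average
    k_reuss = 1.0 / (f1 / k1 + f2 / k2)      # harmonic (Reuss) average
    return k_voigt, k_reuss

# Example: 30% of phase 1 (k = 1.0) embedded in phase 2 (k = 10.0).
print(voigt_reuss_bounds(0.3, 1.0, 10.0))
```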