This data collection consists of archived GOES-R Series Magnetometer (MAG) and Goddard Magnetometer (GMAG) Level 0 data from the operational GOES-East and GOES-West satellites. The Geostationary Operational Environmental Satellite-R (GOES-R) series provides continuity of the GOES mission through 2035 and improvements in geostationary satellite observational data. GOES-16, the first GOES-R satellite, began operating as GOES-East on December 18, 2017; GOES-18, launched on March 1, 2022, replaced GOES-17 as GOES-West in early January 2023; and GOES-19 began operational service on April 7, 2025, replacing GOES-16. MAG measures the magnetic field in the outer portion of the magnetosphere, supporting detection of charged-particle conditions that can be dangerous to spacecraft and human spaceflight. The Magnetometer (MAG) Level 0 product contains Consultative Committee for Space Data Systems (CCSDS) science, engineering, and diagnostic telemetry data packets received from MAG. The Level 0 data files also contain orbit-and-attitude, solar-eclipse, and yaw-flip-state telemetry data packets generated by the GOES spacecraft. Each CCSDS packet contains a unique Application Process Identifier (APID) in its primary header that identifies the specific type of packet and is used to support interpretation of its contents. Users may refer to the GOES-R Series Product Definition and Users' Guide (PUG), Volumes 1 (Main) and 2 (Level 0 Products), for Level 0 data documentation. Related instrument calibration data and Level 1b processing information are archived and available for order at the NOAA CLASS website. The MAG Level 0 data files are delivered in netCDF-4 format; however, the constituent CCSDS packets are stored in a byte array, making the data opaque to standard netCDF reader applications. The MAG Level 0 data files are packaged in daily tar files (data bundles) by satellite for the archive.
Recently ingested archive tar files are available for 14 days on a CLASS-hosted anonymous FTP server for users to download. Data archived on tape are available to users by special order through NCEI customer service. The GMAG, on board GOES-18 and later satellites, is an upgraded magnetometer instrument that offers improved measurements of Earth’s magnetic field over the magnetometers on GOES-16 and GOES-17.
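Since each packet carries its APID in the CCSDS primary header, a reader must decode that header before anything else. A minimal sketch of decoding the standard 6-byte CCSDS Space Packet primary header (this helper is illustrative, not part of the PUG; field layout follows the CCSDS Space Packet Protocol):

```python
import struct

def parse_ccsds_primary_header(packet: bytes) -> dict:
    """Decode the 6-byte CCSDS Space Packet primary header.

    Word 1: version (3 bits) | type (1) | secondary-header flag (1) | APID (11)
    Word 2: sequence flags (2 bits) | sequence count (14)
    Word 3: packet data length field (stores data-field length minus 1)
    """
    if len(packet) < 6:
        raise ValueError("packet shorter than a CCSDS primary header")
    word1, word2, word3 = struct.unpack(">HHH", packet[:6])
    return {
        "version": (word1 >> 13) & 0x7,
        "type": (word1 >> 12) & 0x1,
        "secondary_header": (word1 >> 11) & 0x1,
        "apid": word1 & 0x7FF,
        "sequence_flags": (word2 >> 14) & 0x3,
        "sequence_count": word2 & 0x3FFF,
        "data_length": word3 + 1,  # field stores (length - 1)
    }
```

The extracted APID can then be matched against the packet definitions in PUG Volume 2 to decide how the packet body should be interpreted.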
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An Open Context "types" dataset item. Open Context publishes structured data as granular, URL identified Web resources. This record is part of the "Madaba Plains Project-`Umayri" data publication.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The N-glycan precursor is flipped across the ER membrane, moving it from the cytosolic side to the ER lumenal side. The exact mechanism of this translocation is not well understood but protein RFT1 homolog (RFT1) is known to be involved (Helenius et al. 2002). Defects in RFT1 are associated with congenital disorder of glycosylation 1n (RFT1-CDG, CDG-1n). The disease is a multi-system disorder characterised by under-glycosylated serum glycoproteins. Early-onset developmental retardation, dysmorphic features, hypotonia, coagulation disorders and immunodeficiency are reported features of this disorder. In a patient with RFT1-CDG, Haeuptle et al. identified a homozygous C-T transition at nucleotide 199, resulting in a substitution of cysteine for arginine at codon 67 (R67C) (Haeuptle et al. 2008).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is intended to accompany the paper "Designing Types for R, Empirically" (@ OOPSLA'20, link to paper). This data was obtained by running the Typetracer (aka propagatr) dynamic analysis tool (link to tool) on the test, example, and vignette code of a corpus of >400 extensively used R packages.
Specifically, this dataset contains:
function type traces for >400 R packages (raw-traces.tar.gz);
trace data processed into a more readable/usable form (processed-traces.tar.gz), which was used in obtaining results in the paper;
inferred type declarations for the >400 R packages using various strategies to merge the processed traces (see the type-declarations-* directories); and finally,
contract assertion data from running the reverse dependencies of these packages and checking function usage against the declared types (contract-assertion-reverse-dependencies.tar.gz).
A preprint of the paper is also included, which summarizes our findings.
Fair warning regarding data size: the raw traces, once uncompressed, take up nearly 600 GB. The already-processed traces are in the tens of GB, which should be more manageable for a consumer-grade computer.
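Given the size warning above, it may help to inspect the archives and extract members selectively rather than unpacking everything at once; a sketch using Python's standard tarfile module (the member paths shown in the test are hypothetical, not the dataset's actual layout):

```python
import tarfile

def list_members(archive_path):
    """Return (name, size) for every regular file in the archive,
    without extracting anything to disk."""
    with tarfile.open(archive_path, "r:gz") as tar:
        return [(m.name, m.size) for m in tar.getmembers() if m.isfile()]

def extract_matching(archive_path, dest, substring):
    """Extract only members whose path contains `substring`."""
    with tarfile.open(archive_path, "r:gz") as tar:
        members = [m for m in tar.getmembers() if substring in m.name]
        tar.extractall(dest, members=members)
        return [m.name for m in members]
```

This lets you pull out the traces for a single package of interest instead of materializing the full 600 GB.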
Dataset used in the article "The Reverse Problem of Keystroke Dynamics: Guessing Typed Text with Keystroke Timings". Source data contains CSV files with dataset results summaries, false-positive lists, the evaluated sentences, and their keystroke timings. Results data contains training and evaluation ARFF files for each user and sentence with the calculated Manhattan and Euclidean distances, R metric, and directionality index for each challenge instance. The source data comes from three free-text keystroke dynamics datasets used in previous studies by the authors (LSIA) and two other unrelated groups (KM, and PROSODY, subdivided into GAY, GUN, and REVIEW). Two different languages are represented: Spanish in LSIA and English in KM and PROSODY.
The original dataset KM was used to compare anomaly-detection algorithms for keystroke dynamics in the article "Comparing anomaly-detection algorithms for keystroke dynamics" by Killourhy, K.S. and Maxion, R.A. The original dataset PROSODY was used to find cues of deceptive intent by analyzing variations in typing patterns in the article "Keystroke patterns as prosody in digital writings: A case study with deceptive reviews and essays" by Banerjee, R., Feng, S., Kang, J.S., and Choi, Y.
We proposed a method to determine, using only flight times (keydown/keydown), whether a medium-sized list of candidate texts includes the one to which the timings belong. Neither the text length nor the candidate text list was restricted, and previous samples of the timing parameters for the candidates were not required to train the model. The method was evaluated using three datasets collected by non-mutually-collaborating sets of authors in different environments. False acceptance and false rejection rates were found to remain below or very near 1% when user data was available for training. The former increased two- to three-fold when the models were trained with data from other users, while the latter jumped to around 15%. These error rates are competitive with current methods for text recovery based on keystroke timings, and show that the method can be used effectively even without user-specific training samples, by falling back on general population data.
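The distance measures mentioned above (Manhattan and Euclidean, computed over flight-time vectors) can be sketched as follows; the timing vectors in the test are hypothetical keydown/keydown latencies in milliseconds, not values from the dataset:

```python
import math

def manhattan(sample, reference):
    """Sum of absolute flight-time differences between two timing vectors."""
    return sum(abs(s - r) for s, r in zip(sample, reference))

def euclidean(sample, reference):
    """Straight-line distance between two flight-time vectors."""
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(sample, reference)))
```

A candidate text whose stored timing profile minimizes such distances to the observed timings would be ranked first among the candidates.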
CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html
We present an ultra-high resolution MRI dataset of an ex vivo human brain specimen. The brain specimen was donated by a 58-year-old woman who had no history of neurological disease and died of non-neurological causes. After fixation in 10% formalin, the specimen was imaged on a 7 Tesla MRI scanner at 100 µm isotropic resolution using a custom-built 31-channel receive array coil. Single-echo multi-flip Fast Low-Angle SHot (FLASH) data were acquired over 100 hours of scan time (25 hours per flip angle), allowing derivation of synthesized FLASH volumes. This dataset provides an unprecedented view of the three-dimensional neuroanatomy of the human brain. To optimize the utility of this resource, we warped the dataset into standard stereotactic space. We now distribute the dataset in both native space and stereotactic space to the academic community via multiple platforms. We envision that this dataset will have a broad range of investigational, educational, and clinical applications that will advance understanding of human brain anatomy in health and disease.
httk: High-Throughput Toxicokinetics. Functions and data tables for simulation and statistical analysis of chemical toxicokinetics ("TK") using data obtained from relatively high-throughput, in vitro studies. Both physiologically based ("PBTK") and empirical (e.g., one-compartment) "TK" models can be parameterized for several hundred chemicals and multiple species. These models are solved efficiently, often using compiled (C-based) code. A Monte Carlo sampler is included for simulating biological variability and measurement limitations. Functions are also provided for exporting "PBTK" models to "SBML" and "JARNAC" for use with other simulation software. These functions and data provide a set of tools for in vitro-in vivo extrapolation ("IVIVE") of high-throughput screening data (e.g., ToxCast) to real-world exposures via reverse dosimetry (also known as "RTK"). This dataset is associated with the following publication: Pearce, R., C. Strope, W. Setzer, N. Sipes, and J. Wambaugh. HTTK: R Package for High-Throughput Toxicokinetics. Journal of Statistical Software, 79(4): 1-26, (2017).
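The empirical one-compartment model mentioned here can be illustrated with a minimal sketch; the parameter values in the test are hypothetical, not taken from the httk data tables:

```python
import math

def one_compartment_conc(dose_mg, vd_l, ke_per_h, t_h):
    """Plasma concentration (mg/L) at time t_h after an IV bolus dose in a
    one-compartment model: C(t) = (dose / Vd) * exp(-ke * t)."""
    return (dose_mg / vd_l) * math.exp(-ke_per_h * t_h)
```

With elimination rate constant ke, the half-life follows as t1/2 = ln(2)/ke, so the concentration halves every t1/2 hours.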
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Description: Monitoring data set accompanying the publication "Runnels Reverse Mega-pool Expansion and Improve Marsh Resiliency in the Great Marsh, Massachusetts (USA)" in the journal Wetlands (https://doi.org/10.1007/s13157-023-01683-6). Monitoring was conducted by the Coastal Habitat Restoration Team at Jackson Estuarine Laboratory, University of New Hampshire. The dataset is broken down into 3 components:
(1) Compiled dataset of the project's monitoring data, including detailed metadata on monitoring and data analysis. Metadata and explanations for input data to the R code can be found in the dataset.
(2) Water Level Recorder Analysis R Code - R code used to process tidal water elevations from Hoboware CSV files.
(3) Multivariate Analysis R Code - R code used to conduct non-metric multidimensional scaling ordination, PERMANOVA, and SIMPER analyses on the vegetation dataset.
Abstract: Coastal ecologists in New England have been implementing a restoration strategy of runnels, or shallow ditches, to enhance drainage of oversaturated and ponding interior marshes. In 2015, runnels were constructed to drain two large and expanding pools in the Great Marsh system of Massachusetts, USA. Vegetation, elevation, and hydrology were monitored using field sampling and remote sensing analysis conducted pre- and post-restoration over seven growing seasons to document the recovery of the vegetation community in the pool and salt marsh platform. Vegetation was monitored with 0.5 m2 plots, with all species identified and percent cover estimated per species. Elevation was recorded in the plots with either a laser level (2015) or RTK-GPS (2016, 2021). Water level elevations were monitored with Odyssey capacitance loggers (2015, 2016) and Hobo pressure transducers (2018, 2021).
Contact Information: Questions about the data set can be directed to Grant McKown, james.mckown@unh.edu or jgrantmck@gmail.com
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional file 1. Example datasets: A qPCR dataset containing two example files derived from both, a single plate (SP) and a multiple plate (MP1, MP2) experiment conducted in our laboratory. Datasets are semi-colon-separated csv files exported from qPCRsoft 4.1 software and can be used as input for qRAT.
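Since the example files are semicolon-separated, readers loading them in Python (rather than in qRAT) would need to set the delimiter explicitly; a minimal sketch, with hypothetical column names rather than the actual qPCRsoft export schema:

```python
import csv
import io

def read_semicolon_csv(text):
    """Parse semicolon-delimited qPCR export text into a list of row dicts,
    keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text), delimiter=";"))
```

The same delimiter argument applies to most tabular readers (e.g. a `sep=";"` parameter), so the files can be loaded outside qRAT as well.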
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
If you use this dataset for your work, please cite the related papers: A. Vysocky, S. Grushko, T. Spurny, R. Pastor and T. Kot, Generating Synthetic Depth Image Dataset for Industrial Applications of Hand Localisation, in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3206948.
S. Grushko, A. Vysocký, J. Chlebek, P. Prokop, HaDR: Applying Domain Randomization for Generating Synthetic Multimodal Dataset for Hand Instance Segmentation in Cluttered Industrial Environments. preprint in arXiv, 2023, https://doi.org/10.48550/arXiv.2304.05826
The HaDR dataset is a multimodal dataset designed for human-robot gesture-based interaction research, consisting of RGB and depth frames with binary masks for each hand instance (i1, i2; single-class data). The dataset is entirely synthetic, generated using the Domain Randomization technique in the CoppeliaSim 3D simulator. The dataset can be used to train deep learning models to recognize hands using either a single modality (RGB or depth) or both simultaneously. The training-validation split comprises 95K and 22K samples, respectively, with annotations provided in COCO format. The instances are uniformly distributed across the image boundaries. The vision sensor captures depth and color images of the scene, with the depth pixel values scaled into a single-channel 8-bit grayscale image over the range [0.2, 1.0] m. The following aspects of the scene were randomly varied during generation of the dataset:
• Number, colors, textures, scales, and types of distractor objects, selected from a set of 3D models of general tools and geometric primitives, plus a special type of distractor – an articulated dummy without hands (for instance-free samples).
• Hand gestures (9 options).
• Hand models' positions and orientations.
• Texture and surface properties (diffuse, specular, and emissive) and number (from none to 2) of the object of interest, as well as its background.
• Number and locations of directional light sources (from 1 to 4), in addition to a planar light for ambient illumination.
The sample resolution is 320×256, encoded in lossless PNG format. Samples contain only right-hand meshes (we suggest using flip augmentations during training), with a maximum of two instances per sample.
Test dataset (real camera images): Test dataset containing 706 images was captured using a real RGB-D camera (RealSense L515) in a cluttered and unstructured industrial environment. The dataset comprises various scenarios with diverse lighting conditions, backgrounds, obstacles, number of hands, and different types of work gloves (red, green, white, yellow, no gloves) with varying sleeve lengths. The dataset is assumed to have only one user, and the maximum number of hand instances per sample was limited to two. The dataset was manually labelled, and we provide hand instance segmentation COCO annotations in instances_hands_full.json (separately for train and val) and full arm instance annotations in instances_arms_full.json. The sample resolution was set to 640×480, and depth images were encoded in the same way as those of the synthetic dataset.
Channel-wise normalization and standardization parameters for datasets
| Dataset | Mean (R, G, B, D) | STD (R, G, B, D) |
|---|---|---|
| Train | 98.173, 95.456, 93.858, 55.872 | 67.539, 67.194, 67.796, 47.284 |
| Validation | 99.321, 97.284, 96.318, 58.189 | 67.814, 67.518, 67.576, 47.186 |
| Test | 123.675, 116.28, 103.53, 35.3792 | 58.395, 57.12, 57.375, 45.978 |
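The statistics above are applied channel-wise as normalized = (value - mean) / std; a pure-Python sketch using the Train row (a per-pixel helper for illustration, not the dataset's official preprocessing code):

```python
# Channel order: R, G, B, D (values from the Train row of the table above).
TRAIN_MEAN = (98.173, 95.456, 93.858, 55.872)
TRAIN_STD = (67.539, 67.194, 67.796, 47.284)

def normalize_pixel(rgbd, mean=TRAIN_MEAN, std=TRAIN_STD):
    """Standardize one (R, G, B, D) pixel with channel-wise statistics."""
    return tuple((v - m) / s for v, m, s in zip(rgbd, mean, std))
```

In practice the same operation would be vectorized over whole images (e.g. with NumPy broadcasting), but the arithmetic per channel is exactly this.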
Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
License information was derived automatically
This dataset corresponds to remote sensing image patches whose scene clippings cover only water supply dams in the state of São Paulo, Brazil. The study area comprises nine dams in the state of São Paulo, one of the Brazilian states most affected by drought; the dams are shown in Figure 1. According to [SABESP 2021], the Atibainha, Jacareí, and Jaguari dams belong to the Cantareira water system, while the Billings and Pedro Beicht dams belong to the Guarapiranga and Alto Cotia water systems, respectively. The Itupararanga and Barra Bonita dams belong to the Sorocaba and Médio Tietê Hydrographic Basin, and the Serraria and Serraria dams belong to the Ribeira de Iguape and South Coast Hydrographic Basin, according to [SIGRH 2020].
The training set images were composed of NIR, R and G bands in order to highlight the edges of the dams, based on the work of [Namikawa et al. 2019]. In Figure 2, colours pink, red and green represent the NIR, R, and G bands, respectively. Then, the regions of interest were cut, in the proportion of 224 × 224 pixels, generating a total of 770 images.
Based on hydrological data provided by [SABESP 2021], the images of the training and testing datasets were classified into
i) Normal: volume of the dam is greater than 60% of the total capacity; ii) Low: volume is between 40% and 60% of full capacity; iii) Critical: volume is less than 40% of the total capacity.
Thus, 353, 239, and 178 images were obtained for the normal, low, and critical classes, respectively. Given the difference in the amount of data between the three classes, we used static data augmentation, which applies transformations such as rotations and flips to the images. This made it possible to increase and balance the number of images in each category, helping to equalize the training process and preventing the model from learning more about one class than the others. We obtained 1,527 images per class, for a total of 4,581 training samples.
Regarding the test dataset, the images were composed using the NIR, R and G bands. Each multi-spectral image generated in the composition, with a spatial resolution of 8 meters, was used in the fusion with its respective panchromatic raster, thus generating a multi-spectral image with a spatial resolution of 2 meters. Finally, we cropped the images with the dimensions 224 × 224 pixels and obtained 100 images in total, where 32 are for the critical, 34 for the low and 34 for the normal classes.
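The three volume classes above map directly to thresholds on the percentage of total capacity; a sketch (the handling of values exactly at 40% and 60% is our reading of the class definitions, which the paper does not spell out):

```python
def classify_volume(percent_full):
    """Label a dam observation by reservoir volume as a percentage of
    total capacity: normal (>60%), low (40-60%), critical (<40%)."""
    if percent_full > 60:
        return "normal"
    if percent_full >= 40:
        return "low"
    return "critical"
```

These labels come from the hydrological data of [SABESP 2021], not from the imagery itself, so they serve as ground truth for training the classifier.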
Read more about the dataset and the experiment in this paper.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The R Markdown output data from the random forest analysis of yeast isolation-environment classification from KEGG annotation presence/absence data. The R data can be loaded, and it includes the models and the subsequent analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the first part of reverse transcription, minus-strand synthesis, a DNA strand complementary to the HIV genomic RNA is synthesized, using the viral RNA as a template and a host cell lysine tRNA molecule as primer. The synthesis proceeds in two discrete steps, separated by a strand transfer event. As minus strand DNA is synthesized, the viral genomic RNA is degraded, also in several discrete steps. Two specific polypurine tracts (PPT sequences) in the viral RNA, one within the pol gene (central or cPPT) and one immediately preceding the U3 sequence (3' PPT) are spared from degradation and serve to prime synthesis of DNA complementary to the minus strand (plus-strand synthesis). During plus-strand synthesis, Preston and colleagues observed secondary sites of plus-strand initiation at low frequency both in the cell-free system and in cultured virus (Klarman et al., 1997). Both DNA synthesis and RNA degradation activities are catalyzed by the HIV-1 reverse transcriptase (RT) heterodimer.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The data used in this paper are from the 16th data release of SDSS (SDSS-DR16), which contains a total of 930,268 photometric images, with 1.2 billion observed sources and tens of millions of spectra. The data were downloaded from the official SDSS website; specifically, they were obtained through the SkyServer API by running SQL query statements in the CasJobs sub-site. Since the current SDSS photometric table PhotoObj can only classify observed sources as point sources and extended sources, target sources can be better classified as galaxies, stars, and quasars through spectra. Therefore, we obtained calibrated sources in CasJobs by crossing SpecPhoto with the PhotoObj star list, together with target position information (right ascension and declination). Calibrated sources can be told apart precisely and quickly: each is labeled with the parameter "Class" as "galaxy", "star", or "quasar". Observation day areas 3462, 3478, 3530, and four other areas in SDSS-DR16 were selected as experimental data, because a large number of sources can be obtained in these areas, providing rich sample data for the experiment. For example, there are 9891 sources in day area 3462, including 2790 galaxy sources, 2378 stellar sources, and 4723 quasar sources; day area 3478 contains 3862 sources, including 1759 galaxy sources, 577 stellar sources, and 1526 quasar sources. FITS files are a commonly used data format in the astronomical community. By cross-matching the star list and FITS files in the local celestial region, we obtained images in the five bands u, g, r, i, and z for 12499 galaxy sources, 16914 quasar sources, and 16908 star sources as training and testing data.
1.1 Image synthesis
SDSS photometric data include images in the five bands u, g, r, i, and z, packaged per band in FITS files. Images in different bands contain different information. Since the g, r, and i bands contain more feature information and less noise, astronomical researchers typically map them to the R, G, and B channels of an image to synthesize photometric images. In general, different bands cannot be composited directly: if the three bands are combined naively, the images may not be aligned. Therefore, this paper adopts the RGB multi-band image synthesis software written by He Zhendong et al. to synthesize images from the g, r, and i bands, which effectively avoids the alignment problem. Each photometric image in this paper is 2048×1489 pixels.
1.2 Data cropping
We first cropped the target images; image segmentation tools can handle this, and we implemented the process in Python. During cropping, we convert the right ascension and declination of each source in the star list into pixel coordinates on the photometric image through a coordinate-conversion formula, and determine the specific position of the source through those pixel coordinates. The coordinates are taken as the center point, and cropping is carried out as a rectangular box. We found that the input image size affects the experimental results, so, according to the target size of the sources, we tried three crop sizes: 40×40, 60×60, and 80×80. Through experiment and analysis, we found that the convolutional neural network has better learning ability and higher accuracy on data with small image sizes. In the end, we cropped the extended-source galaxies and the point-source quasars and stars to 40×40.
1.3 Division of training and test data
To give the algorithm accurate recognition performance, we need enough image samples. The selection of the training, validation, and test sets is an important factor affecting the final recognition accuracy. In this paper, the training, validation, and test sets are split in the ratio 8:1:1; the validation set is used to tune the algorithm, and the test set is used to evaluate the generalization ability of the final algorithm. Table 1 shows the specific data partitioning information. The total sample size is 34,000 source images, including 11543 galaxy sources, 11967 star sources, and 10490 quasar sources.
1.4 Data preprocessing
After data preprocessing, the training and test sets can be used as the training and test inputs of the algorithm; data quantity and quality largely determine the recognition performance of the algorithm. Preprocessing differs between the training and test sets. On the training set, we perform vertical flips, horizontal flips, and scaling on the cropped images to enrich the data samples and enhance the generalization ability of the algorithm; since the features of celestial sources are flip-invariant, the labels of galaxies, stars, and quasars do not change after these transformations. On the test set, preprocessing is simpler: we only apply scaling to the input images before feeding them to the test.
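The cropping described above (converting each source's right ascension and declination into pixel coordinates, then cutting a fixed window centered on them) can be sketched as follows; the coordinate conversion itself is assumed to have been done already, and the helper is illustrative rather than the authors' code:

```python
def crop_around(image, cx, cy, size):
    """Cut a size-by-size box centered on pixel (cx, cy) from a 2-D image
    (a list of rows), clamping the box so it stays inside the image."""
    half = size // 2
    top = max(0, min(cy - half, len(image) - size))
    left = max(0, min(cx - half, len(image[0]) - size))
    return [row[left:left + size] for row in image[top:top + size]]
```

Clamping matters for sources near the edge of a 2048×1489 frame, where a naive center crop would run out of pixels.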
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Original paper: Miyawaki Y, Uchida H, Yamashita O, Sato M, Morito Y, Tanabe HC, Sadato N & Kamitani Y (2008) Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders. Neuron 60:915-929.
This is the fMRI data from Miyawaki et al. (2008) "Visual image reconstruction from human brain activity using a combination of multiscale local image decoders". Neuron 60:915-29. In this study, we collected fMRI activity from subjects viewing images, and constructed decoders predicting local image contrast at multiple spatial scales. The combined decoders based on a linear model successfully reconstructed presented stimuli from fMRI activity.
The experiment consisted of human subjects viewing contrast-based images of 12 x 12 flickering patches. There were two types of image viewing tasks: (1) random image viewing and (2) figure image (geometric shape or alphabet letter) viewing. For image presentation, a block design was used with rest periods between the presentation of each image. For random image patch presentation, images were presented for 6 s, followed by 6 s of rest. For figure image presentation, images were presented for 12 s, followed by 12 s of rest. The data from the random image viewing runs were used to train the decoding models, and the trained models were evaluated with the data from the figure image viewing runs.
This dataset contains two subjects ('sub-01' and 'sub-02'). The subjects performed two sessions of fMRI experiments ('ses-01' and 'ses-02'). Each session is composed of several EPI runs (TR, 2000 ms; TE, 30 ms; flip angle, 80°; voxel size, 3 × 3 × 3 mm; FOV, 192 × 192 mm; number of slices, 30; slice gap, 0 mm) and inplane T2-weighted imaging (TR, 6000 ms; TE, 57 ms; flip angle, 90°; voxel size, 0.75 × 0.75 × 3.0 mm; FOV, 192 × 192 mm). The EPI images covered the entire occipital lobe. The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 2.98 ms for sub-01 and 3.06 ms for sub-02; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1w images were obtained in sessions different from the fMRI experiment sessions and are stored in 'ses-anat' directories. The T1w images were defaced by pydeface (https://pypi.python.org/pypi/pydeface). All DICOM files were converted to Nifti-1 files by mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in the sourcedata directory (see the README in sourcedata for more details).
During fMRI runs, the subject viewed contrast-based images of 12 × 12 flickering image patches. Two types of runs ('viewRandom' and 'viewFigure') were included in the experiment. In 'viewRandom' runs, random images were presented as visual stimuli. Each 'viewRandom' run consisted of 22 stimulus presentation trials and lasted for 298 s (149 volumes). The two subjects each performed 20 'viewRandom' runs. In 'viewFigure' runs, either a geometric shape pattern (square, small frame, large frame, plus, X) or an alphabet letter pattern (n, e, u, r, o) was presented in each trial. In addition, data collected while the subjects viewed thin and long alphabet letter patterns (n, e, u, r, o) are included in the dataset (they are not included in the results of the original study). Each 'viewFigure' run consisted of 10 stimulus presentation trials and lasted for 268 s (134 volumes). Subjects 'sub-01' and 'sub-02' performed 12 and 10 'viewFigure' runs, respectively.
To help subjects suppress eye blinks and firmly fixate the eyes, the color of the fixation spot changed from white to red 2 s before each stimulus block started. To ensure alertness, subjects were instructed to detect a color change of the fixation spot (red to green, 100 ms) that occurred after a random interval of 3–5 s from the beginning of each stimulus block. The subjects' performance was monitored online during the experiments but was not recorded, and is therefore omitted from the dataset.
The value of trial_type in the task event files (*_events.tsv) indicates the type of each trial (block) as below.
rest: Rest trial (no visual stimulus).
stimulus_random: Random pattern.
stimulus_shape: Geometric shape pattern (square, small frame, large frame, plus, X).
stimulus_alphabet: Alphabet pattern (n, e, u, r, o).
stimulus_alphabet_thin: Thin alphabet pattern (n, e, u, r, o).
stimulus_alphabet_long: Long alphabet pattern (n, e, u, r, o).
Note that the results from thin and long alphabet patterns are not included in the original paper, although the data were obtained in the same sessions.
An additional column, stimulus_pattern, contains the pattern of stimuli (12 × 12) presented in each stimulus trial. It is vectorized in row-major order. Each element in the vector corresponds to a patch (1.15° × 1.15°) in a stimulus pattern. 1 and 0 represent a flickering checkerboard and a gray area, respectively. For example, a stimulus pattern of
000000000000000000000000000000000000000111111000000111111000000110011000000110011000000110011000000110011000000000000000000000000000000000000000
represents the following stimulus.
000000000000
000000000000
000000000000
000111111000
000111111000
000110011000
000110011000
000110011000
000110011000
000000000000
000000000000
000000000000
The column holds 'null' for rest trials.
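Unpacking the row-major stimulus_pattern string into the 12 × 12 grid shown above is a short slicing exercise; a sketch:

```python
def pattern_to_grid(pattern, width=12):
    """Split a row-major 0/1 stimulus string into rows of `width` patches."""
    if len(pattern) % width:
        raise ValueError("pattern length must be a multiple of width")
    return [pattern[i:i + width] for i in range(0, len(pattern), width)]
```

Applied to the 144-character example string above, this reproduces the twelve rows listed, with the frame pattern occupying rows 4 through 9.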
===========================================
Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface
MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/stable/
1) www.openfmri.org/dataset/ds******/ - see the comments section at the bottom of the dataset page.
2) www.neurostars.org - please tag any discussion topics with the tags openfmri and dsXXXXXX.
3) Send an email to submissions@openfmri.org. Please include the accession number in your email.
- Behavioral performance data do not accompany this dataset, as the submitter did not provide them.
https://pasteur.epa.gov/license/sciencehub-license.html
httk: High-Throughput Toxicokinetics
Functions and data tables for simulation and statistical analysis of chemical toxicokinetics ("TK") using data obtained from relatively high throughput, in vitro studies. Both physiologically-based ("PBTK") and empirical (e.g., one compartment) "TK" models can be parameterized for several hundred chemicals and multiple species. These models are solved efficiently, often using compiled (C-based) code. A Monte Carlo sampler is included for simulating biological variability and measurement limitations. Functions are also provided for exporting "PBTK" models to "SBML" and "JARNAC" for use with other simulation software. These functions and data provide a set of tools for in vitro-in vivo extrapolation ("IVIVE") of high throughput screening data (e.g., ToxCast) to real-world exposures via reverse dosimetry (also known as "RTK").
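The reverse-dosimetry idea can be illustrated with a minimal one-compartment steady-state calculation. This is a sketch in Python; httk itself is an R package, and the function names and example numbers here are hypothetical, not part of the package:

```python
# Illustrative one-compartment steady-state toxicokinetics (hypothetical
# helper names; httk, the R package described above, is not used here).

def css_per_unit_dose(clearance_l_per_day, dose_mg_per_kg_day=1.0, bw_kg=70.0):
    """Steady-state plasma concentration (mg/L) for a constant oral dose,
    assuming complete absorption: Css = dose rate / total clearance."""
    dose_rate_mg_per_day = dose_mg_per_kg_day * bw_kg
    return dose_rate_mg_per_day / clearance_l_per_day

def oral_equivalent_dose(bioactive_conc_mg_per_l, clearance_l_per_day):
    """Reverse dosimetry: the dose (mg/kg/day) whose Css equals the
    in vitro bioactive concentration."""
    return bioactive_conc_mg_per_l / css_per_unit_dose(clearance_l_per_day)

# Example: total clearance 140 L/day, bioactive concentration 0.5 mg/L
print(oral_equivalent_dose(0.5, 140.0))  # 1.0 mg/kg/day
```

Scaling a screening concentration by Css per unit dose in this way is the core of IVIVE-based prioritization; the package's Monte Carlo sampler varies the physiological inputs to capture inter-individual variability.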
This dataset is associated with the following publication: Ring, C., R. Pearce, W. Setzer, B. Wetmore, and J. Wambaugh (2017). Refining high-throughput prioritization of environmental chemicals to include inter-individual variability across subpopulations. Environment International, 106: 105-118.
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Amblyopia is a developmental visual disorder that causes substantial visual deficits. Studies using resting-state functional magnetic resonance imaging (rs-fMRI) have disclosed abnormal brain functional connectivity (FC) in amblyopes, both across long-range cortical sites and within the visual cortex, which is considered to be related to impaired visual functions. However, little work has examined whether restoring the vision of amblyopes is accompanied by an improvement in FC. Here, in adult amblyopes and healthy subjects, we compared brain FC before and after an altered-reality adaptation training.

Sixteen amblyopia patients and 14 healthy subjects participated in this study. Due to a scanner malfunction, the data of one patient were excluded. All participants finished 6 daily sessions of complementary patchwork adaptation using altered reality (Bao, Dong, Liu, Engel, & Jiang, 2018). Their visual acuity (measured with ETDRS charts) and rs-fMRI data were acquired before (pre-test), one day after (post-test), and one month after (post 1 month, patients only) the training. We assessed how the training affected the voxel-wise FC in early visual areas (V1-V3) and searched for any altered FC between visual areas and other brain networks.

The participants were scanned with a Siemens 3T Magnetom Trio scanner, using a 20-channel phased-array head coil. High-resolution T1-weighted anatomical images were acquired at the beginning of each session (176 interleaved sagittal slices, repetition time (TR) = 2600 ms, echo time (TE) = 3.02 ms, flip angle = 8°, field of view (FOV) = 256 mm, voxel resolution = 1.0 mm × 1.0 mm × 1.0 mm). Resting-state data were obtained after the anatomical scan with T2*-weighted echo-planar imaging (EPI) (TR = 2000 ms, TE = 30 ms, flip angle = 90°, 32 axial slices, FOV = 200 mm, voxel resolution = 3.1 mm × 3.1 mm × 3.5 mm). The rs-fMRI run consisted of 300 whole-brain volumes. Participants fixated on a central white cross on a gray background during the scanning.

The rs-fMRI data were analyzed using the AFNI software package (https://afni.nimh.nih.gov). Visual areas, including V1, V2 and V3, were identified for each participant on the surface of each quadrant, based on the anatomical templates provided by Benson (https://hub.docker.com/r/nben/occipital_atlas/) (Benson et al., 2018). The analysis of FC was performed using customized MATLAB and Python code.

'amblyopia_IMAdata.zip' and 'normal_IMAdata.zip' contain the raw MRI data in .IMA format. 'Amblyopia_fMRIdata.zip', 'Normal_fMRIdata.zip' and 'analysis scripts.zip' contain the .gz data and scripts we used to produce the results in the paper (Dong, X., Liu, L., Du, X., Wang, Y., Zhang, P., Li, Z., & Bao, M. (2023). Treating amblyopia using altered reality enhances the fine-scale functional correlations in early visual areas. Human Brain Mapping, 1–12. https://doi.org/10.1002/hbm.26526).
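At its core, voxel-wise FC of the kind described above is a Pearson correlation between voxel time series. A minimal sketch, assuming numpy and illustrative toy data (this is not the authors' released MATLAB/Python scripts):

```python
import numpy as np

def voxelwise_fc(timeseries):
    """Pearson correlation between every pair of voxel time series.

    timeseries: array of shape (n_voxels, n_timepoints), e.g. preprocessed
    signals from the 300 rs-fMRI volumes described above (illustrative).
    """
    # z-score each voxel's time series; the correlation matrix is then
    # the normalized dot product of the z-scored signals.
    ts = timeseries - timeseries.mean(axis=1, keepdims=True)
    ts /= ts.std(axis=1, keepdims=True)
    return ts @ ts.T / ts.shape[1]

# Toy example: 5 voxels, 300 timepoints of random noise
rng = np.random.default_rng(0)
fc = voxelwise_fc(rng.standard_normal((5, 300)))
print(fc.shape)  # (5, 5); diagonal entries are 1
```

In practice each voxel within V1-V3 would be correlated against the others (or against network-averaged signals) and the resulting FC values compared across pre-test, post-test, and post-1-month sessions.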
License: U.S. Government Works, https://www.usa.gov/government-works
License information was derived automatically
This U.S. Geological Survey (USGS) data release includes whole rock geochemical and isotopic data, and uranium-lead isotopic data collected by both Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) and Sensitive High Resolution Ion Microprobe-Reverse Geometry (SHRIMP-RG) methods for rocks in Colorado, Wyoming, Utah, and New Mexico.
Dianyuea turbinata_matK: Dianyuea turbinata maturase K (matK) gene, partial cds; chloroplast
Dianyuea turbinata_rbcL: Dianyuea turbinata ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcL) gene, partial cds; chloroplast
Dianyuea turbinata_trnH-L: Dianyuea turbinata trnL-trnF intergenic spacer, partial sequence; chloroplast
MATK-F.1392427.C05: MatK original sequence data, forward
MATK-R.1392428.C06: MatK original sequence data, reverse
RBCL-F.1392429.C07: rbcL original sequence data, forward
RBCL-R.1392430.C08: rbcL original sequence data, reverse
TRNL-F.YP01513957.E05: trnL-trnF original sequence data, forward
TRNL-R.YP01513958.E06: trnL-trnF original sequence data, reverse
Dataset includes phytoplankton data collected during cruise 22 of R/V Skif (February - March 1989) in the Indian sector of the Southern Ocean. AccConID=21 AccConstrDescription=This license lets others distribute, remix, tweak, and build upon your work, even commercially, as long as they credit you for the original creation. This is the most accommodating of licenses offered. Recommended for maximum dissemination and use of licensed materials. AccConstrDisplay=This dataset is licensed under a Creative Commons Attribution 4.0 International License. AccConstrEN=Attribution (CC BY) AccessConstraint=Attribution (CC BY) AccessConstraints=None Acronym=None added_date=2013-12-09 16:37:38.127000 BrackishFlag=0 CDate=2013-08-14 cdm_data_type=Other CheckedFlag=0 Citation=Bryantseva Yu. (1989). Phytoplankton data collected during cruise 24 of R/V Skif (February - March 1989) in the Indian sector of the Southern Ocean. Dataset published in electronic format by IBSS in 2013, consulted via iOBIS on [date]. Comments=None ContactEmail=None Conventions=COARDS, CF-1.6, ACDD-1.3 CurrencyDate=None DasID=4359 DasOrigin=Research: field survey DasType=Data DasTypeID=1 DateLastModified={'date': '2024-06-29 01:33:58.600000', 'timezone_type': 1, 'timezone': '+00:00'} DescrCompFlag=0 DescrTransFlag=0 Easternmost_Easting=75.0 EmbargoDate=None EngAbstract=Dataset includes phytoplankton data collected during cruise 22 of R/V Skif (February - March 1989) in the Indian sector of the Southern Ocean. EngDescr=Samples were taken by 7 L Niskin bathometer at standard depths: 0, 10, 25, 50 and 100 m. Samples were concentrated by reverse filtration through nucleoporous (nuclear) filters with 1 µm pores. Fixation: 40% formaldehyde or glutaraldehyde. Cell counts were done with a light microscope (magnification 100, 200, 400 x).
In addition, samples were taken by 7 L Niskin bathometer from depths where the irradiance level was 100, 46, 25, 10 and 1%, and were processed without concentration in a 0.1-0.5 ml drop (1-3 replicates). Individual biovolumes and biomass (wet weight) were calculated via geometric approximations.
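The geometric biovolume approximation mentioned above can be sketched as follows. This is an illustrative sketch: the record does not specify which shape formula was assigned to which taxon, and the example dimensions are hypothetical:

```python
import math

# Illustrative geometric biovolume approximations (µm^3) of the kind
# commonly used for phytoplankton cell counts; shape assignments and
# dimensions here are hypothetical, not taken from this dataset.

def biovolume_sphere(diameter_um):
    # V = (pi / 6) * d^3
    return math.pi / 6 * diameter_um ** 3

def biovolume_cylinder(diameter_um, height_um):
    # V = (pi / 4) * d^2 * h
    return math.pi / 4 * diameter_um ** 2 * height_um

def biomass_pg(biovolume_um3):
    # Wet weight assuming cell density ~1 g/cm^3: 1 µm^3 ≈ 1 pg
    return biovolume_um3

print(round(biovolume_sphere(10.0), 1))  # 523.6 µm^3 for a 10 µm cell
```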
At IBSS, the data were digitized from paper processing books, quality-controlled and transformed to comply with the Darwin Core/OBIS Schema. Funds for data operations were kindly provided by the Census of Marine Life International Cosmos Prize Fund, through a grant to Rutgers University. FreshFlag=0 geospatial_lat_max=-60.0 geospatial_lat_min=-67.0 geospatial_lat_units=degrees_north geospatial_lon_max=75.0 geospatial_lon_min=60.0 geospatial_lon_units=degrees_east infoUrl=None institution=UkrSCES, NASU-IBSS License=https://creativecommons.org/licenses/by/4.0/ Lineage=None MarineFlag=1 modified_sync=2021-02-05 00:00:00 Northernmost_Northing=-60.0 OrigAbstract=None OrigDescr=None OrigDescrLang=None OrigDescrLangNL=None OrigLangCode=None OrigLangCodeExtended=None OrigLangID=None OrigTitle=None OrigTitleLang=None OrigTitleLangCode=None OrigTitleLangID=None OrigTitleLangNL=None Progress=Completed PublicFlag=1 ReleaseDate=None ReleaseDate0=None RevisionDate=None SizeReference=2414 distribution records sourceUrl=(local files) Southernmost_Northing=-67.0 standard_name_vocabulary=CF Standard Name Table v70 StandardTitle=Phytoplankton data collected during cruise 24 of R/V Skif (February - March 1989) in the Indian sector of the Southern Ocean StatusID=1 subsetVariables=ScientificName,BasisOfRecord,YearCollected,MonthCollected,DayCollected,aphia_id TerrestrialFlag=0 time_coverage_end=1989-03-06T07:30:00Z time_coverage_start=1989-02-16T15:30:00Z UDate=2022-08-01 VersionDate=None VersionDay=None VersionMonth=None VersionName=None VersionYear=None VlizCoreFlag=1 Westernmost_Easting=60.0