This submission contains raw Load Cell data and processing scripts associated with MHKDR submission 394 (UNH TDP - Concurrent Measurements of Inflow, Power Performance, and Loads for a Grid-Synchronized Vertical Axis Cross-Flow Turbine Operating in a Tidal Estuary, DOI: 10.15473/1973860) from the University of New Hampshire and Atlantic Marine Energy Center (AMEC) turbine deployment platform. The user is directed to the MHKDR submission 394 for relevant context and detail of this deployment; see link below. The 394_READ_ME file here provides the description from that submission for quick reference. The READ_ME file for this specific instrument from the 394 submission is also available here. This submission contains a zipped folder structure containing raw data in its original format and MATLAB (2019a) processing scripts used to process and manipulate the data into its final form. The final data products are submitted in the 394 submission.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data of absorbance at 340 nm during the enzyme assays performed for Fig. 5, as well as calculations to generate specific activities (in the presence and absence of 100 µM fruc-1,6-BP) from the raw data.
Original Title: Composition of Foods Raw, Processed, Prepared - USDA National Nutrient Database for Standard Reference, Release 27
The database consists of several datasets: food descriptions, nutrients, weights and measures, footnotes, and data sources. The Nutrient Data file contains the average nutrient values per 100 grams of the edible portion of the food, along with fields to further describe the average value. Information on household measures for food items is provided. Weight is given for food items without waste. Footnotes are provided for a few items where information on food descriptions, weights and measures, or nutrient values could not be accommodated in the available fields. Data were collected from published and unpublished sources. Published data sources include the scientific literature. Unpublished data include information obtained from the food industry, other government agencies, and research conducted under contracts initiated by the USDA Agricultural Research Service (ARS). Updated data have been published electronically on the USDA Nutrient Data Laboratory (NDL) website since 1992. Standard Reference (SR) 27 contains composition data for all food groups and nutrients published in the 21 volumes of Agriculture Handbook 8 (US Department of Agriculture 1976-92) and its four supplements (US Department of Agriculture 1990-93), which superseded the 1963 edition (Watt and Merrill, 1963). SR27 supersedes all previous releases, including printed versions, in the event of any differences.
Source: https://catalog.data.gov/dataset/composition-of-foods-raw-processed-prepared-usda-national-nutrient-database-for-standard-r Last update at https://catalog.data.gov/organization/usda-gov: 2020-02-21
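The relational layout described above (food descriptions keyed to average nutrient values per 100 g of edible portion) can be joined with a short script. A hedged sketch, not part of the original submission, assuming the caret-delimited ASCII release files FOOD_DES.txt and NUT_DATA.txt commonly distributed with SR releases; file names, column positions, and encoding are assumptions to verify against the SR27 documentation:
```python
# Hedged sketch: join SR food descriptions with per-100 g nutrient values.
# Assumes caret-delimited files with '~' text qualifiers, keyed by NDB number.
import pandas as pd

food_des = pd.read_csv("FOOD_DES.txt", sep="^", quotechar="~",
                       header=None, encoding="latin-1", dtype=str)
nut_data = pd.read_csv("NUT_DATA.txt", sep="^", quotechar="~",
                       header=None, encoding="latin-1", dtype=str)

# Column 0 in both files is assumed to be the NDB food number; column 2 of
# FOOD_DES the long description; columns 1-2 of NUT_DATA the nutrient number
# and the value per 100 g of edible portion.
food_des = food_des[[0, 2]].rename(columns={0: "ndb_no", 2: "long_desc"})
nut_data = nut_data[[0, 1, 2]].rename(columns={0: "ndb_no", 1: "nutr_no", 2: "nutr_val"})
nut_data["nutr_val"] = nut_data["nutr_val"].astype(float)

merged = nut_data.merge(food_des, on="ndb_no", how="left")
print(merged.head())
```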
Food choices significantly affect global health, contributing to an estimated 11 million deaths from diseases such as diabetes, cancer, and cardiovascular disease.
The American Institute for Cancer Research emphasizes the potential reduction of neoplasms, or abnormal tissue growth, by changing lifestyle and dietary choices in developing countries. Over the past decade, there has been an increase in the consumption of processed and ultra-processed foods worldwide. This escalation underscores the critical need to investigate and raise awareness of the effects of these foods on overall health.
Definition of processed and ultra-processed foods: Food processing involves techniques that modify raw foods for ease of storage and consumption. In contemporary classifications, foods are divided into four categories: unprocessed foods, processed culinary ingredients, processed foods, and ultra-processed foods (UPF). Processed foods are raw agricultural products altered by methods including cleaning, dehydrating, heating, juicing, or freezing. Examples include tofu (from raw soybeans), dry-roasted almonds (from raw almonds), and canned vegetables (such as tomatoes and carrots). In contrast, ultra-processed foods undergo more processing and contain additional ingredients such as fats, salt, sugar, preservatives, and artificial colors and flavors; examples include processed meats, sugary drinks, packaged sweets, frozen snacks, and instant soups and meals.
Dispelling beliefs about processed foods: There is a common misconception that all processed foods have no nutritional value and are harmful. However, some processing methods can enhance or preserve food, for example by fortifying it with essential nutrients such as iron, vitamins, and iodine to prevent deficiencies. In addition, processing methods such as cooking, drying, and pasteurization can prevent the growth of harmful bacteria, increase shelf life, and improve flavor and texture, making food easier to prepare.
Nutritional content and its effect: Research shows that ultra-processed foods (UPFs) have poor nutritional value and contribute to severe conditions such as obesity. These foods are typically high in energy, refined carbohydrates, unhealthy fats, and sugar, but low in fiber, protein, minerals, and vitamins. Various studies have linked consumption of UPFs to increased risks of obesity, abdominal obesity, metabolic syndrome, high blood pressure (hypertension), type 2 diabetes, cardiovascular disease, cancer, and depression. These health risks are associated with the poor nutritional content and high glycemic load of UPFs.
Getting to know food labels: Identifying processed and ultra-processed foods from food labels alone can be challenging, as specific processing techniques are often not disclosed. However, understanding some basic concepts of food development and processing can help. For example, fresh fruits and vegetables are not processed or ultra-processed, while cooking ingredients such as vegetable oils, sugars, and salts fall into the minimally processed category. Additionally, the number of ingredients listed on a food product can indicate its level of processing. Cosmetic additives listed at the end of the ingredie...
Overview: This dataset was produced from the raw sodar .txt files from the Colorado City, TX site during the WFIP1 campaign. Quality control and formatting have been applied to consolidate the numerous raw files into a single file, providing user friendliness and improved wind resource characterization at this location.
Data Details: Location: 32.47215, -100.92134. Elevation: 673 m. Output heights: every 10 meters from 30 meters to 200 meters.
Data Quality: Data from the raw files were filtered according to the following automated and manual procedures. Missing and rejected values were flagged as -999. High-precipitation events, as suggested by the vertical velocity values, were subjected to quality control: if any vertical velocity value at any height for a given timestamp fell below a -1.5 m/s threshold, all variables at all heights at that timestamp were rejected. On a height-by-height basis, if the signal-to-noise ratio (SNR) for any of the u, v, or w wind components reached 10 or below, all variables for that height and timestamp were rejected. The raw files were also screened for nonphysical values such as wind speeds less than zero and directions outside 0-360 degrees. Finally, the data were visually examined for events of atypical sodar retrievals, such as excessive magnitudes in oscillations or periods of stagnancy.
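The automated rejection rules above translate into a straightforward mask. A minimal sketch, not from the original submission, assuming the consolidated data sit in a pandas DataFrame indexed by (timestamp, height) with hypothetical column names ws, wd, u, v, w, snr_u, snr_v, snr_w:
```python
# Sketch of the automated sodar QC rules described above (column names are hypothetical).
import pandas as pd

FLAG = -999.0

def apply_qc(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    data_cols = ["ws", "wd", "u", "v", "w"]

    # 1) Precipitation screen: any w < -1.5 m/s at any height rejects the whole timestamp.
    bad_times = out.loc[out["w"] < -1.5].index.get_level_values("timestamp").unique()
    out.loc[out.index.get_level_values("timestamp").isin(bad_times), data_cols] = FLAG

    # 2) SNR screen, height by height: reject the record if any component SNR <= 10.
    low_snr = (out[["snr_u", "snr_v", "snr_w"]] <= 10).any(axis=1)
    out.loc[low_snr, data_cols] = FLAG

    # 3) Nonphysical values: negative wind speeds, directions outside 0-360 degrees.
    nonphys = (out["ws"] < 0) | (out["wd"] < 0) | (out["wd"] > 360)
    out.loc[nonphys, data_cols] = FLAG
    return out
```
The final manual step described above (visual screening of atypical retrieval events) would be applied by hand rather than by this mask.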
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Over the course of 24 hours, we collected raw (photoplethysmography (PPG), acceleration, and gyroscope) and processed (steps, calories, sleep, HR, HRV, SpO2, respiratory rate, and R-R interval) data samples. Biostrap approaches health insights from a data-driven perspective. Our clinical-grade hardware enables users to accurately track SpO2, HRV, RHR, and a variety of other biometrics with confidence.
Overview: This dataset was produced from the raw sodar .dat files from the Cleburne, TX site during the WFIP1 campaign. Quality control and formatting have been applied to consolidate the numerous raw files into a single file, providing user friendliness and improved wind resource characterization at this location.
Data Details: Location: 32.35, -97.44. Elevation: 258 m. Output heights: every 10 meters from 30 meters to 200 meters.
Data Quality: Data from the raw files were filtered according to the following automated and manual procedures. Missing and rejected values were flagged as -999. High-precipitation events, as suggested by the vertical velocity values, were subjected to quality control: if any vertical velocity value at any height for a given timestamp fell below a -1.5 m/s threshold, all variables at all heights at that timestamp were rejected. The raw files were also screened for nonphysical values such as wind speeds less than zero and directions outside 0-360 degrees. Finally, the data were visually examined for events of atypical sodar retrievals, such as excessive magnitudes in oscillations or periods of stagnancy.
https://cdla.io/sharing-1-0/
Sparse matrices of raw counts data for the open-problems-multimodal competition. The script for generating the sparse matrices was shared by Wei Xie and can be found here.
A similar dataset with normalized and log1p-transformed counts for the same cells can be found here.
Each h5 file contains 5 arrays:
axis0 (row index from the original h5 file)
axis1 (column index from the original h5 file)
value_i (slot i of the dgCMatrix in R, or the indices attribute of csc_array in scipy.sparse)
value_p (slot p of the dgCMatrix in R, or the indptr attribute of csc_array in scipy.sparse)
value_x (slot x of the dgCMatrix in R, or the data attribute of csc_array in scipy.sparse)
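Given the five arrays above, the compressed-sparse-column matrix can be rebuilt directly. A minimal sketch, not part of the original submission, assuming the arrays are stored as top-level datasets in the h5 file; the file name is hypothetical and the row/column orientation should be checked against the competition data:
```python
# Rebuild the CSC matrix from the five arrays described above (file name hypothetical).
import h5py
import scipy.sparse as sp

with h5py.File("train_multi_inputs_raw_sparse.h5", "r") as f:
    rows = f["axis0"][:]     # row index from the original h5 file
    cols = f["axis1"][:]     # column index from the original h5 file
    i = f["value_i"][:]      # dgCMatrix slot i / CSC indices
    p = f["value_p"][:]      # dgCMatrix slot p / CSC indptr
    x = f["value_x"][:]      # dgCMatrix slot x / CSC data

# In CSC layout len(indptr) == n_cols + 1; swap rows/cols below if this check fails.
assert len(p) == len(cols) + 1
mat = sp.csc_matrix((x, i, p), shape=(len(rows), len(cols)))
print(mat.shape, mat.nnz)
```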
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This submission contains raw Shark Power Meter data and processing scripts associated with MHKDR submission 394 (UNH TDP - Concurrent Measurements of Inflow, Power Performance, and Loads for a Grid-Synchronized Vertical Axis Cross-Flow Turbine Operating in a Tidal Estuary, DOI: 10.15473/1973860) from the University of New Hampshire and Atlantic Marine Energy Center (AMEC) turbine deployment platform.
The user is directed to the MHKDR submission 394 for relevant context and detail of this deployment; see link below. The 394_READ_ME file here provides the description from that submission for quick reference.
The READ_ME file for this specific instrument from the 394 submission is also available here.
This submission contains a zipped folder structure containing raw data in its original format and MATLAB (2019a) processing scripts used to process and manipulate the data into its final form. The final data products are submitted in the 394 submission.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Raw data file for article:
Precision of the Integrated Cognitive Assessment for the assessment of neurocognitive performance in athletes at risk of concussion
https://spdx.org/licenses/etalab-2.0.html
These data are associated with the results presented in the paper "Compacting an assembly of soft balls far beyond the jammed state: insights from 3D imaging". They are scans of compressed millimetric silicone balls. Micro glass beads are trapped in the silicone so that 3D digital image correlation (DIC) can be performed. Post-processed data, including displacement fields, strain fields, and contacts, among others, are also available. More details about the experimental protocol and data post-processing can be found in this publication.
How the data are sorted: The raw and post-processed data of 4 experiments are available here. For each experiment:
- a 'scan' folder includes 'scan_XX' folders, where 'XX' corresponds to the N compression steps. Inside each of these folders are 8-bit png pictures corresponding to the vertical slices of the density matrix of a given compression step. Slices in which no particles are visible have been removed to save space.
- a 'result' folder contains all the data post-processed from the density images. More specifically:
  - 'pressure_kpa.txt' is an N-vector giving the evolution of the applied pressure (in kPa) on the loading piston, where N is the number of loading steps.
  - 'particle_number.txt' is an n-vector telling to which particle each correlation cell belongs, where n is the number of correlation cells.
  - 'particle_size.txt' is an m-vector, where m is the number of particles in the system. It gives the particle size: 1 for large particles, 0 for small ones. Particle numbering corresponds with 'particle_number.txt'.
  - The following text files are N x n matrices, where N is the number of steps and n is the number of correlation cells. They give, for each correlation cell, the evolution of an observable measured in the corresponding volume of the correlation cell:
    - 'position_i.txt' is the position of the cell along the i axis
    - 'position_j.txt' is the position of the cell along the j axis
    - 'position_k.txt' is the position of the cell along the k axis
    - 'correlation.txt' is the evolution of the correlation value when performing the 3D DIC. This constitutes the goodness of measurement of the correlation cell positions
    - 'dgt_Fij.txt' is the evolution of the deformation gradient tensor for each of its ij components
    - 'energy.txt' is the evolution of the energy density stored in the material
    - 'no_outlier_energy.txt' is a boolean giving, from the energy density measurement, whether the observables can be considered an outlier (value 0) or not (value 1)
  - The following text files are m x N matrices with self-explanatory contents, where N is the number of loading steps and m is the number of grains (particle numbering corresponds with 'particle_number.txt'). They give, for each grain, the evolution of an observable measured at the grain scale. The major direction is the direction in which the particle is the longest; the minor direction is the direction in which the particle is the shortest. Theta and phi are the azimuthal and elevation angles, respectively:
    - 'particle_asphericity.txt'
    - 'particle_area.txt'
    - 'minor_direction_theta.txt'
    - 'minor_direction_phi.txt'
    - 'minor_direction_length.txt'
    - 'major_direction_theta.txt'
    - 'major_direction_phi.txt'
    - 'major_direction_length.txt'
  - The following text files are N-vectors with self-explanatory contents, where N is the number of loading steps. They give the evolution of a system observable during loading. If a second vector is given, it is the evolution of the standard deviation of the observable. In the case of contacts, 'proximity' is for contacts obtained only from the proximity criterion and 'density' is for contacts obtained from the scanner density criterion. 'std' stands for standard deviation:
    - 'global strain.txt' measured from the evolution of the system boundaries
    - 'packing_fraction.txt' measured from the evolution of the system boundaries and particle volumes
    - 'average_contact_surface_proximity.txt'
    - 'average_contact_surface_density.txt'
    - 'average_contact_radius_proximity.txt'
    - 'average_contact_densitt_proximity.txt'
    - 'average_contact_outofplane_proximity.txt'
    - 'average_contact_direction_proximity.txt'
    - 'average_contact_direction_density.txt'
    - 'average_contact_asphericity_proximity.txt'
    - 'average_contact_asphericity_density.txt'
    - 'average_vonMises_strain.txt'
    - 'std_vonMises_strain.txt'
    - 'average contact direction_density.txt'
    - 'average_energy.txt'
    - 'std_energy.txt'
    - 'contact_proximity.txt' (number of contacts)
    - 'contact_density.txt' (number of contacts)
- a 'contact_density' folder includes 'XX' folders corresponding to the N compression steps. Each of these 'XX' folders includes 'ijkP_AA_BB.txt' files, which give information about potential contact points between grains AA and BB. For each potential contact, 'ijkP_AA_BB.txt' gives the i, j, and k positions of the potential contact points in AA and the associated average local density value, which gives the probability of contact.
- a 'contact_proximity' folder includes 'XX' folders corresponding to the N compression steps. Each of these 'XX'...
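For orientation, the 'result' text files described above can be loaded with standard tools. A minimal sketch, not part of the original deposit, assuming whitespace-delimited ASCII files and a hypothetical experiment folder name:
```python
# Load a few of the post-processed text files described above (paths hypothetical).
import numpy as np

result = "experiment_01/result"  # hypothetical path to one experiment's 'result' folder

pressure = np.loadtxt(f"{result}/pressure_kpa.txt")          # N-vector: applied pressure (kPa) per loading step
cell_particle = np.loadtxt(f"{result}/particle_number.txt")  # n-vector: particle owning each correlation cell
pos_i = np.loadtxt(f"{result}/position_i.txt")               # N x n matrix: cell positions along the i axis

# Consistency check between the number of loading steps (N) and correlation cells (n).
assert pos_i.shape == (pressure.size, cell_particle.size)
```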
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This data set contains archival raw, partially processed, and ancillary/supporting radio science data acquired during the Mapping (MAP) phase of the Mars Global Surveyor (MGS) mission. The radio observations were carried out using the MGS spacecraft and Earth-based receiving stations of the NASA Deep Space Network (DSN). The observations were designed to test the spacecraft radio system, the DSN ground system, and MGS operations procedures; to be used in generating high-resolution gravity field models of Mars; and to estimate the density and structure of the Mars atmosphere. A small number of surface scattering experiments were also conducted. Of most interest are likely to be the Orbit Data File and Original Data Record files, in the ODF and ODR directories, respectively, which provided the raw input to gravity and atmospheric investigations. The MAP phase extended from March 1999 through January 2001. Data were organized in approximately chronological order and delivered on a set of 184 CD volumes at the rate of 2-3 CDs per week. Typical volume of a one-day ODF was 300-400 kB. Typical volume of an ODR was 5-10 MB, and there were typically 8-16 ODRs per day depending on DSN schedules and observing geometry.
https://academictorrents.com/nolicensespecified
Raw and processed data used in the paper "Deep learning-based approach for high spatial resolution fiber shape sensing", published in the Journal of Communications Engineering. The code for processing the raw data is available in the link provided under the "code availability" section of the paper. Alternative download link:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This metadata record provides details of the raw data produced from the SHARE-seq experiments for the HDMA study (Liu et al. bioRxiv 2025).
De-identified tissue samples were collected at Stanford University School of Medicine from elective termination of pregnancy procedures with informed consent for the research use of tissues in observance of relevant legal and institutional ethical regulations. SHARE-seq was performed on isolated nuclei (Methods, Note S1).
In total, N=76 samples were profiled, from across 10-23 post-conception weeks, and covering a total of 12 tissues. The full list of samples, along with experimental batch, age, and sex, is provided in Supplementary Table 1.
All DNA libraries were sequenced on a NovaSeq 6000 using 300-cycle S4 v1.5 reagent kits with XP workflow. Paired-end sequencing was run with a 96-99-8-96 configuration (Read1-Index1-Index2-Read2). Sequencing was performed at the Stanford Genome Technology Center.
We developed a highly parallelized, rapid, and storage-efficient pre-processing pipeline to convert BCL files from sequencers into ATAC fragment files and RNA sparse matrices (Fig. S1, Methods); the pipeline is available in full at https://github.com/GreenleafLab/shareseq-pipeline (stable release v1.0.0).
The raw data are in the form of FASTQ pairs per sample per data modality. Please contact Dr. William Greenleaf (wjg@stanford.edu) regarding access to the raw data.
Processed data in the form of fragments per sample (ATAC modality) and gene expression count matrices (RNA modality) are provided. The full list of datasets deposited is provided in Supplementary Table 14.
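The processed outputs described above (per-sample ATAC fragment files and RNA gene-expression count matrices) can be loaded for a quick sanity check. A minimal sketch, not from the original record, assuming a 10x-style tab-separated fragments file and a Matrix Market count matrix; both file names, the column order, and the matrix orientation are hypothetical:
```python
# Inspect one sample's processed outputs (file names and formats are assumptions).
import pandas as pd
from scipy.io import mmread

# ATAC fragments: chrom, start, end, cell barcode, count (assumed 10x-style TSV, gzipped).
frags = pd.read_csv(
    "sample01.fragments.tsv.gz",
    sep="\t",
    comment="#",
    names=["chrom", "start", "end", "barcode", "count"],
)
print(frags.groupby("barcode")["count"].sum().describe())  # fragments per barcode

# RNA counts: sparse matrix assumed to be in Matrix Market format.
rna = mmread("sample01.rna_counts.mtx.gz")
print(rna.shape)
```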
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data, processed data, and R code for "Statistical Learning Shapes Face Evaluations"
https://ega-archive.org/dacs/EGAC50000000693
We profile the whole transcriptome (bulk RNA-seq) of 7 patient-derived Sézary Syndrome (SS) cells to identify expression patterns, functional programs, and expressed gene mutations that may provide clues to new therapeutic options for SS patients. The libraries were sequenced on a NextSeq 500 (Illumina) with a paired-end read length of 2x75 bp. Raw data (FASTQ) and processed data (VCF), including all raw called variants, are available.
This submission contains raw Voltsys rectifier data and processing scripts associated with MHKDR submission 394 (UNH TDP - Concurrent Measurements of Inflow, Power Performance, and Loads for a Grid-Synchronized Vertical Axis Cross-Flow Turbine Operating in a Tidal Estuary, DOI: 10.15473/1973860) from the University of New Hampshire and Atlantic Marine Energy Center (AMEC) turbine deployment platform. The user is directed to the MHKDR submission 394 for relevant context and detail of this deployment; see link below. The 394_READ_ME file here provides the description from that submission for quick reference. The READ_ME file for this specific instrument from the 394 submission is also available here. This submission contains a zipped folder structure containing raw data in its original format and MATLAB (2019a) processing scripts used to process and manipulate the data into its final form. The final data products are submitted in the 394 submission.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
The Magellan (MGN) Radio Occultation (ROCC) Raw Data Archive (RDA) is a time-ordered collection of raw and partially processed data from radio occultation experiments conducted using the Magellan spacecraft while it orbited Venus.
Every laboratory performing mass spectrometry-based proteomics strives to generate high-quality data. Among the many factors that influence the outcome of any proteomics experiment is the performance of the LC-MS system, which should be monitored continuously. This process is termed quality control (QC). We present an easy-to-use, rapid tool that produces a visual, HTML-based report including the key parameters needed to monitor LC-MS system performance. The tool, named RawBeans, can generate a report for individual files or for a set of samples from a whole experiment. We anticipate it will help proteomics users and experts evaluate raw data quality, independent of data processing. The tool is available here: https://bitbucket.org/incpm/prot-qc/downloads.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data set follows from the "weight.csv" data set. Based primarily on the work by Sarria for aphids, as shown in the Sarria workbook (https://www.sciencedirect.com/science/article/pii/S0168169909000283), the new variables examine durations, frequencies, and the time to specific behavioral events. These values are calculated and saved by the SAS program; the insect size data were then added in an intermediate step to complete the dataset.
This data set contains archival raw, partially processed, and ancillary/supporting radio science data acquired during the Extended Mission (EXT) phase of the Mars Global Surveyor (MGS) mission. The radio observations were carried out using the MGS spacecraft and Earth-based receiving stations of the NASA Deep Space Network (DSN). The observations were designed to test the spacecraft radio system, the DSN ground system, and MGS operations procedures; to be used in generating high-resolution gravity field models of Mars; and to estimate the density and structure of the Mars atmosphere. Of most interest are likely to be the Orbit Data File and Radio Science Receiver files, in the ODF and RSR directories, respectively, which provided the raw input to gravity and atmospheric investigations. The EXT phase began on 1 February 2001. Data were organized in approximately chronological order and delivered on a set of several hundred CD-WO volumes at the rate of 2-3 CDs per week. Typical volume of a one-day ODF was 300-400 kB. Typical volume of an RSR ranged from 5 to 10 MB, and there could be 0-30 RSRs per day depending on DSN schedules and observing geometry.