Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
a, Frequently, variation in data from across the sciences is characterized with the arithmetic mean and the standard deviation SD. Often, it is evident from the numbers that the data have to be skewed. This becomes clear if the lower end of the 95% interval of normal variation, - 2 SD, extends below zero, thus failing the “95% range check”, as is the case for all cited examples. Values in bold contradict the positive nature of the data. b, More often, variation is described with the standard error of the mean, SEM (SD = SEM · √n, with n = sample size). Such distributions are often even more skewed, and their original characterization as being symmetric is even more misleading. Original values are given in italics (°estimated from graphs). Most often, each reference cited contains several examples, in addition to the case(s) considered here. Table 2 collects further examples.
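The "95% range check" described above is simple to apply in code. The sketch below is illustrative, not from the cited work: it flags a positive-valued variable whose reported mean and SD imply a normal 95% interval extending below zero, and converts a reported SEM back to SD using SD = SEM · √n.

```python
def fails_range_check(mean, sd):
    """True if mean - 2*SD < 0: for data that can only be positive,
    the reported mean and SD then imply a skewed distribution."""
    return mean - 2 * sd < 0

def sem_to_sd(sem, n):
    """Recover SD from a reported SEM: SD = SEM * sqrt(n)."""
    return sem * n ** 0.5

print(fails_range_check(10, 6))                 # lower bound -2 -> True
print(fails_range_check(10, 4))                 # lower bound +2 -> False
print(fails_range_check(10, sem_to_sd(2, 25)))  # SEM 2, n 25 -> SD 10 -> True
```

The SEM case shows why SEM-based descriptions can be "even more misleading": a small SEM can hide a large SD once multiplied by √n.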
DWR has a long history of studying and characterizing California's groundwater aquifers as a part of California's Groundwater (Bulletin 118). California's Groundwater Basin Characterization Program provides the latest data and information about California's groundwater basins to help local communities better understand their aquifer systems and support local and statewide groundwater management.
Under the Basin Characterization Program, new and existing data (AEM, lithology logs, geophysical logs, etc.) are integrated to create continuous maps and three-dimensional models. To support this effort, new data analysis tools have been developed to create texture models, hydrostratigraphic models, and aquifer flow parameters. Data collection efforts have been expanded to include advanced geologic, hydrogeologic, and geophysical data collection, and data digitization and quality control efforts will continue. To support data access and data equity, the Basin Characterization Program has developed new online, GIS-based visualization tools that serve as a central hub for accessing and exploring groundwater-related data in California.
Additional information can be found on the Basin Characterization Program webpage.
DWR is undertaking local, regional, and statewide investigations to evaluate California's groundwater resources and develop state-stewarded maps and models. New and existing data have been combined and integrated using the analysis tools described below to develop maps and models that describe the grain-size, hydrostratigraphic, and hydrogeologic conceptual properties of California's aquifers. These maps and models help groundwater managers understand how groundwater is stored and moves within the aquifer. The models will be state-stewarded, meaning that they will be regularly updated as new data become available, to ensure that up-to-date information is used for groundwater management activities. The first iterations of the following maps and models will be published as they are developed:
Click on the link below for each local, regional, or statewide investigation to find the following datasets.
As a part of the Basin Characterization Program, advanced geologic, hydrogeologic, and geophysical data will be collected to improve our understanding of groundwater basins. Data collected under Basin Characterization are collected at a local, regional, or statewide scale depending on the scope of the study. Advanced data collection methods include:
Lithology and geophysical logging data have been digitized to support the Statewide AEM Survey Project and will continue to be digitized to support Basin Characterization efforts. All digitized lithology logs with Well Completion Report IDs will be imported back into the OSWCR database. Digitized lithology and geophysical logging can be found under the following resource:
To develop the state-stewarded maps and models outlined above, new tools and process documents have been created to integrate and analyze a wide range of data, including geologic, geophysical, and hydrogeologic information. By combining and assessing various datasets, these tools help create a more complete picture of California's groundwater basins. All tools, along with guidance documents, are made publicly available for local groundwater managers to use to support development of maps and models at a local scale. All tools and guidance will be updated as revisions to tools and process documents are made.
Data2Texture: Data2Texture is an advanced spatial data interpolation tool that estimates the distribution of sediment textures from airborne electromagnetic data and lithology logs to create a 3D texture model.
Data2HSM - Smart Interpretation: Data2HSM via Smart Interpretation (SI) is a semi-automatic Python tool for delineating continuous hydrogeologic surfaces from airborne electromagnetic data products.
Data2HSM - Gaussian Mixture Model: The Data2HSM via Gaussian Mixture Model tool ingests AEM data and groups the data into a user-specified number of clusters that are interpreted as stratigraphic units in the hydrostratigraphic model (HSM).
Data2HSM - Geological Pseudolabel Deep Neural Network: The GeoPDNN (Geological Pseudolabel Deep Neural Network) is a semi-supervised machine learning tool that integrates lithologic well logs and AEM data into plausible stratigraphic surfaces.
Texture2Par V2: Texture2Par V2 is a groundwater model pre-processor and parameterization utility developed to work with the IWFM and MODFLOW families of hydrologic simulation code.
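The clustering idea behind the Data2HSM Gaussian Mixture Model tool can be illustrated with a minimal, self-contained sketch. This is not DWR's actual tool: the data below are synthetic stand-ins for log-resistivity values from an AEM inversion, and the fit is a hand-rolled 1-D expectation-maximization loop rather than a production implementation.

```python
import numpy as np

# Synthetic stand-in for log10(resistivity) from an AEM inversion:
# two overlapping populations, e.g. a fine- and a coarse-grained unit.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.8, 0.1, 500),    # fine-grained unit
                    rng.normal(1.6, 0.15, 500)])  # coarse-grained unit

# Fit a two-component 1-D Gaussian mixture by expectation-maximization.
mu, sigma, w = np.array([0.0, 2.0]), np.array([0.5, 0.5]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each sample
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = resp.argmax(axis=1)  # hard assignment of each sample to a "unit"
print(np.round(mu, 2))        # recovered component means
```

The hard labels play the role of the interpreted stratigraphic units; the real tool additionally works in 3D and ties clusters to depth and position.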
Data access equity is a priority for the Basin Characterization Program. To ensure data access equity, the Basin Characterization Program has developed applications and tools that allow data to be visualized without access to expensive data visualization software. The list below provides links and descriptions for the Basin Characterization Program's suite of data viewers.
SGMA Data Viewer: Basin Characterization tab: Provides maps, depth slices, and profiles of Basin Characterization maps, models, and datasets, including the following:
3D AEM Data Viewer: Displays the Statewide AEM Survey electrical resistivity and coarse fraction data.
The National Cooperative Soil Survey - Soil Characterization Database (NCSS-SCD) contains laboratory data for more than 65,000 locations (i.e., XY coordinates) throughout the United States and its Territories, and about 2,100 locations from other countries. It is a compilation of data from the Kellogg Soil Survey Laboratory (KSSL) and several cooperating laboratories. The data steward and distributor is the National Soil Survey Center (NSSC). Information contained within the database includes physical, chemical, biological, mineralogical, morphological, and mid-infrared reflectance (MIR) soil measurements, as well as a collection of calculated values. The intended use of the data is to support interpretations related to soil use and management.
Data Usage: Access to the data is provided via the following user interfaces:
1. Interactive Web Map
2. Lab Data Mart (LDM) interface for querying data and generating reports
3. Soil Data Access (SDA) web services for querying data
4. Direct download of the entire database in several formats.
Data at each location include measurements at multiple depths (e.g., soil horizons). However, not all analyses have been conducted for each location and depth. Typically, a suite of measurements was collected based upon assumed or known conditions regarding the soil being analyzed. For example, soils of arid environments are routinely analyzed for salts and carbonates as part of the standard analysis suite. Standard morphological soil descriptions are available for about 60,000 of these locations. Mid-infrared (MIR) spectroscopy is available for about 7,000 locations. Soil fertility measurements, such as those made by Agricultural Experiment Stations, were not made. Most of the data were obtained over the last 40 years, with about 4,000 locations sampled before 1960, 25,000 from 1960-1990, 27,000 from 1990-2010, and 13,000 from 2010 to 2021. Generally, the number of measurements recorded per location has increased over time.
Typically, the data were collected to represent a soil series or map unit component concept. They may also have been sampled to determine the range of variation within a given landscape.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Files contain 5000 samples of AWARE characterization factors, as well as sampled independent data used in their calculations and selected intermediate results.
AWARE is a consensus-based method developed to assess water use in life cycle assessment (LCA). It was developed by the WULCA UNEP/SETAC working group. Its characterization factors represent the relative Available WAter REmaining per area in a watershed after the demand of humans and aquatic ecosystems has been met. It assesses the potential of water deprivation, to either humans or ecosystems, building on the assumption that the less water remaining available per area, the more likely another user will be deprived.
The code used to generate the samples can be found here: https://github.com/PascalLesage/aware_cf_calculator/
Samples were updated from v1.0 in 2020 to include model uncertainty associated with the choice of WaterGap as the global hydrological model (GHM).
The following datasets are supplied:
1) AWARE_characterization_factor_samples.zip
Actual characterization factors resulting from the Monte Carlo Simulation. Contains 4 zip files:
* monthly_cf.zip: contains 116,484 arrays of 5000 monthly characterization factor samples, one for each of 9707 watersheds and each month, in csv format. Names are cf_<id>_<month>.csv, where <id> is the watershed id and <month> is the first three letters of the month ('jan', 'feb', etc.).
* average_agri_cf.zip: contains 9707 arrays of 5000 annual average, agricultural use, characterization factor samples, one for each watershed, in csv format. Names are cf_average_agri_<id>.csv.
* average_non_agri_cf.zip: contains 9707 arrays of 5000 annual average, non-agricultural use, characterization factor samples, one for each watershed, in csv format. Names are cf_average_non_agri_<id>.csv.
* average_unknown_cf.zip: contains 9707 arrays of 5000 annual average, unspecified use, characterization factor samples, one for each watershed, in csv format. Names are cf_average_unknown_<id>.csv.
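Assuming the naming scheme above (cf_<id>_<month>.csv, with lowercase three-letter month abbreviations), a small helper can build the file name for a given watershed and month; the watershed id used in the example is hypothetical.

```python
MONTHS = ['jan', 'feb', 'mar', 'apr', 'may', 'jun',
          'jul', 'aug', 'sep', 'oct', 'nov', 'dec']

def monthly_cf_filename(watershed_id, month_number):
    """Build the monthly CF sample file name for a watershed
    (month_number runs from 1 to 12)."""
    return f"cf_{watershed_id}_{MONTHS[month_number - 1]}.csv"

print(monthly_cf_filename(1234, 2))   # cf_1234_feb.csv
```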
2) AWARE_base_data.xlsx
Excel file with the deterministic data, per watershed and per month, for each of the independent variables used in the calculation of AWARE characterization factors. Specifically, it includes:
Monthly irrigation
Description: irrigation water, per month, per basin
Unit: m3/month
Location in Excel doc: Irrigation
File name once imported: irrigation.pickle
table shape: (11050, 12)
Non-irrigation hwc: electricity, domestic, livestock, manufacturing
Description: non-irrigation uses of water
Unit: m3/year
Location in Excel doc: hwc_non_irrigation
File name once imported: electricity.pickle, domestic.pickle,
livestock.pickle, manufacturing.pickle
table shape: 3 x (11050,)
avail_delta
Description: Difference between "pristine" natural availability
reported in PastorXNatAvail and natural availability calculated
from "Actual availability as received from WaterGap - after
human consumption" (Avail!W:AH) plus HWC.
This should be added to the calculated water availability to obtain the water availability used for the calculation of EWR (environmental water requirements).
Unit: m3/month
Location in Excel doc: avail_delta
File name once imported: avail_delta.pickle
table shape: (11050, 12)
avail_net
Description: Actual availability as received from WaterGap - after human consumption
Unit: m3/month
Location in Excel doc: avail_net
File name once imported: avail_net.pickle
table shape: (11050, 12)
pastor
Description: fraction of PRISTINE water availability that should be reserved for environment
Unit: unitless
Location in Excel doc: pastor
File name once imported: pastor.pickle
table shape: (11050, 12)
area
Description: area
Unit: m2
Location in Excel doc: area
File name once imported: area.pickle
table shape: (11050,)
It also includes:
information (k values) on the distributions used for each variable (uncertainty tab)
information (k values) on the model uncertainty (model uncertainty tab)
two filters used to exclude watersheds that are either in Greenland (polar filter) or without data from the Pastor et al. (2014) method (122 cells), representing small coastal cells with no direct overlap (pastor filter). (filters tab)
3) independent_variable_samples.zip
Samples for each of the independent variables used in the calculation of characterization factors. Only random variables are contained. For all watershed or watershed-months without samples, the Monte Carlo simulation used the deterministic values found in the AWARE_base_data.xlsx file.
The files are in csv format. The first column contains the watershed id (BAS34S_ID) if the data are annual, or the (BAS34S_ID, month) pair for data with a monthly resolution. The other 5000 columns contain the sampled data.
The names of the files are <variable_name>.csv.
4) intermediate_variables.zip
Contains results of intermediate calculations, used in the calculation of characterization factors. The zip file contains 3 zip files:
* AMD_world_over_AMD_i.zip: contains 116,484 arrays (for each watershed-month) of 5000 calculated values of the ratio between the AMD (Availability Minus Demand) for the watershed-month and AMD_glo, the world weighted AMD average. Format is csv.
* AMD_world.zip: contains one array of 5000 calculated values of the world average AMD. Format is csv.
* HWC.zip: contains 116,484 arrays (for each watershed-month) of 5000 calculated values of the total Human Water Consumption. Format is csv.
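As we understand the published AWARE method, the characterization factor for a watershed-month is the ratio AMD_glo/AMD_i, limited to the range [0.1, 100], with CF = 100 where no water remains after demand. A hedged numpy sketch of that final step, with made-up AMD values:

```python
import numpy as np

def aware_cf(amd_world, amd_i):
    """Turn AMD samples into characterization factors:
    CF_i = AMD_world / AMD_i, clipped to [0.1, 100];
    non-positive AMD_i (demand >= availability) maps to 100."""
    amd_i = np.asarray(amd_i, dtype=float)
    with np.errstate(divide="ignore"):
        cf = amd_world / amd_i
    cf = np.where(amd_i <= 0, 100.0, cf)  # no water remaining per area
    return np.clip(cf, 0.1, 100.0)

# Hypothetical AMD_i values against a world average of 1.0:
print(aware_cf(1.0, [2.0, 0.005, -1.0, 1.0]))
```

Applied per sample, this turns the 5000 AMD_world_over_AMD_i draws into 5000 CF draws per watershed-month.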
5) watershedBAS34S_ID.zip
Contains the GIS files to link the watershed ids (BAS34S_ID) to actual spatial data.
Abstract
Background: To reduce the effects of climate change, the current fossil-based energy system must transition to a low-carbon system based largely on renewables. In both academic literature and non-academic discourse concerning the energy transition, resilience is frequently mentioned as an additional objective or requirement. Despite its frequent use, resilience is a very malleable term with different meanings in different contexts.
Main text: This paper seeks to identify how resilience is understood in the field of the energy system and whether there are similar aspects in the different ways the term is understood. To this end, we review more than 130 papers for definitions of energy system resilience. In addition, we use different aspects to categorize and examine these. The results paint a diverse picture in terms of the definition and understanding of resilience in the energy system. However, a few definition archetypes can be identified. The first uses a straightforward approach, in which the energy system has one clearly defined equilibrium state. Here, resilience is defined in relation to the response of the energy system to a disturbance and its ability to quickly return to its equilibrium. The second type of resilience allows for different equilibriums, to which a resilient energy system can move after a disruption. Another type of resilience focuses more on the process and the actions of the system in response to disruption. Here, resilience is defined as the ability of the system to adapt and change. In the papers reviewed, we find that the operational definition of resilience often encompasses aspects of different archetypes. This diversity shows that resilience is a versatile concept with different elements.
Conclusions: With this paper, we aim to provide insight into how the understanding of resilience for the energy system differs depending on which aspect of the energy system is studied, and which elements might be necessary for different understandings of resilience. We conclude by providing information and recommendations on the potential usage of the term energy system resilience based on our lessons learned.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All raw and processed data plus the written thesis. Data and figures are stored in the 'Figures_and_Data' directory. Experimental measurements were done by means of BLS microscopy (group of H. Schultheiß at HZDR). Micromagnetic simulations were done at the Hemera cluster (Dr. A. Kakay at HZDR). Data analysis was done in Python or Jupyter notebooks (open source); all scripts are included. Graphics were done using OmniGraffle and Blender. Plotting was done using Python and 'Plot2' (Mac only). All files, data, and scripts are sorted by figure. The entire LaTeX package is stored under 'Thesis_Hula'; Dissertation.tex is the main file and shows all required dependencies.
ABSTRACT Purpose: to characterize the performance of Brazilian adolescents in the Pitch Pattern Sequence (PPS) test and compare the results with Auditec® normative values. Methods: 26 adolescents enrolled in elementary or secondary education, of both sexes and between 12 and 18 years of age, participated in the study. The inclusion criteria adopted were: a) no alterations in the visual inspection of the external acoustic meatus; b) hearing thresholds within the normal range for both ears, that is, values equal to or lower than 25 dBHL; c) bilateral type "A" tympanometric curve; d) presence of acoustic reflex, contralateral mode, at the frequencies of 500, 1000, and 2000 Hz in both ears; e) typical auditory behavior according to the Scale of Auditory Behaviors (SAB), that is, a score greater than 46 points. For adolescents who met the inclusion criteria, the PPS (Auditec® version) was applied binaurally at 50 dBSL. The findings were analyzed in a descriptive and inferential manner. Results: statistical analysis showed significance only for the comparison of the mean value of 88.10%, obtained in the PPS performed by Brazilians, with the normative mean of 96% suggested by Auditec®. Conclusion: the findings of this study demonstrated that the values obtained in the PPS, Auditec® version, in the Brazilian population were similar to those presented in the international literature.
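The comparison reported in the Results (an observed mean of 88.10% against the Auditec® normative mean of 96%) has the form of a one-sample test. The sketch below shows how such a t statistic is computed; the scores are made up for illustration, not the study's raw data.

```python
import math
import statistics

def one_sample_t(scores, norm_value):
    """One-sample t statistic: (sample mean - normative value)
    divided by the standard error of the mean, SD / sqrt(n)."""
    n = len(scores)
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)
    return (mean - norm_value) / (sd / math.sqrt(n))

scores = [88, 90, 85, 92, 87, 89, 86, 91]  # illustrative values only
print(round(one_sample_t(scores, 96), 2))
```

A strongly negative t indicates the sample mean falls well below the normative value, which is the direction of the difference reported above.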
https://www.nist.gov/open/license
This data publication contains mass spectrometry data for the chemical characterization of microplastics and nanoplastics. The data from this study include mass spectra of pure, mixed, and weathered microplastics and nanoplastics at high and low fragmentation, extracted ion chromatograms, Kendrick mass defect plots, code, and the derived and processed data. The data analysis code (MATLAB 2022a*) used for unsupervised learning of cluster and compositional relationships is also included. The code employs principal component analysis for dimensionality reduction, learns the resulting datasets' latent dimensionality, and completes Gaussian mixture modeling and fuzzy c-means clustering. *Any mention of commercial products is for information only; it does not imply recommendation or endorsement by NIST.
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
In collaboration with researchers at the University of Puerto Rico, Mayaguez, the U.S. Geological Survey acquired multimethod surface seismic imaging data at 21 sites in Puerto Rico for shear-wave velocity site characterization. These data were collected at both permanent and temporary seismograph stations throughout the island, between March 6 and March 18, 2022. Data were acquired using 72 4.5 Hz resonant frequency geophones in linear arrays. Both single-component P-wave and S-wave data were recorded, as well as Refraction Microtremor (REMI) data. Data files are in Society of Exploration Geophysicists 2 (SEG2) format, using a ‘.dat’ file extension. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
https://www.nist.gov/open/license
These four data files contain datasets from an interlaboratory comparison that characterized a polydisperse five-population bead dispersion in water. A more detailed version of this description is available in the ReadMe file (PdP-ILC_datasets_ReadMe_v1.txt), which also includes definitions of abbreviations used in the data files. Paired samples were evaluated, so the datasets are organized as pairs associated with a randomly assigned laboratory number. The datasets are organized in the files by instrument type: PTA (particle tracking analysis), RMM (resonant mass measurement), ESZ (electrical sensing zone), and OTH (other techniques not covered in the three largest groups, including holographic particle characterization, laser diffraction, flow imaging, and flow cytometry). In the OTH group, the specific instrument type for each dataset is noted. Each instrument type (PTA, RMM, ESZ, OTH) has a dedicated file. Included in the data files for each dataset are: (1) the cumulative particle number concentration (PNC, (1/mL)); (2) the concentration distribution density (CDD, (1/mL·nm)) based upon five bins centered at each particle population peak diameter; (3) the CDD in higher resolution, varied-width bins. The lower-diameter bin edge (µm) is given for (2) and (3). Additionally, the PTA, RMM, and ESZ files each contain unweighted mean cumulative particle number concentrations and concentration distribution densities calculated from all datasets reporting values. The associated standard deviations and standard errors of the mean are also given. In the OTH file, the means and standard deviations were calculated using only data from one of the sub-groups (holographic particle characterization) that had n = 3 paired datasets. Where necessary, datasets not using the common bin resolutions are noted (PTA, OTH groups). 
The data contained here are presented and discussed in a manuscript to be submitted to the Journal of Pharmaceutical Sciences and presented as part of that scientific record.
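The concentration distribution density in (2) and (3) above is a number concentration normalized by bin width, which is what makes datasets with different bin resolutions comparable. A small illustration with hypothetical values:

```python
def concentration_distribution_density(counts_per_ml, bin_edges_nm):
    """Divide each bin's number concentration (1/mL) by its width (nm)
    to obtain the CDD in 1/(mL*nm)."""
    widths = [hi - lo for lo, hi in zip(bin_edges_nm, bin_edges_nm[1:])]
    return [c / w for c, w in zip(counts_per_ml, widths)]

counts = [1.0e7, 4.0e6]    # particles/mL in each bin (hypothetical)
edges = [800, 1000, 1500]  # bin edges in nm (hypothetical)
print(concentration_distribution_density(counts, edges))  # [50000.0, 8000.0]
```

Because each bin is divided by its own width, the varied-width bins of (3) and the five peak-centered bins of (2) end up in the same units.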
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This file contains information about the supporting data included with the following journal manuscript:
J.C. Plumb, J.F. Lind, J.C. Tucker, R. Kelley, A.D. Spear. "Three-dimensional grain mapping of open-cell metallic foam by integrating synthetic data with experimental data from high-energy X-ray diffraction microscopy," Materials Characterization, 2018. https://doi.org/10.1016/j.matchar.2018.07.031
Please reference the above manuscript in any work that uses this data.
\Dream3D - This folder contains the pipeline (AluminumFoam_pipeline.json) that was run using DREAM.3D and all of the necessary files to produce the original instantiation of the hybrid (measured + synthetic) volume of foam reported in the article. DREAM.3D version 6.4 was used by the authors. D3DInstructions.txt is a text file with written instructions for how to run the pipeline using the given input parameters.
Input files include:
* BulkFoam_Voxelized.txt: the voxelized data set used to represent the original nominal volume of foam
* HybridFoam_Original_Centroids.txt: contains the x,y,z coordinates and Bunge Euler angles for each of the 264 grains
Output files include:
* HybridFoam_Original.xdmf and HybridFoam_Original.dream3d: can be used for visualization
* HybridFoam_Original.csv: contains DREAM.3D's calculated parameters for each grain, including the best-fit ellipsoid parameters
Also included is a Paraview state file for easy visualization of the results in Paraview.
\HexrdOutput - This folder contains the grains.out files for all ligaments that were analyzed using HEXRD (see Ref. [29] in manuscript). The output files give information on the x,y,z coordinates and crystal orientations for the grains found in each scan. The grains.out files are labeled according to the same convention that is used in the manuscript referenced above.
Subsurface data analysis, reservoir modeling, and machine learning (ML) techniques have been applied to the Brady Hot Springs (BHS) geothermal field in Nevada, USA to further characterize the subsurface and assist with optimizing reservoir management. Hundreds of reservoir simulations have been conducted in TETRAD-G and CMG STARS to explore different injection and production fluid flow rates and allocations and to develop a training data set for ML. This process included simulating the historical injection and production since 1979 and predicting future performance through 2040. ML networks were created and trained using TensorFlow based on multilayer perceptron, long short-term memory, and convolutional neural network architectures. These networks took as input selected flow rates, injection temperatures, and historical field operation data and produced estimates of future production temperatures. This approach was first successfully tested on a simplified single-fracture doublet system, followed by application to the BHS reservoir. Using an initial BHS data set with 37 simulated scenarios, the trained and validated network predicted the production temperature for six production wells with a mean absolute percentage error of less than 8%. In a complementary analysis effort, principal component analysis applied to 13 BHS geological parameters revealed that vertical fracture permeability shows the strongest correlation with fault density and fault intersection density. A new BHS reservoir model was developed considering the fault intersection density as a proxy for permeability. This new reservoir model helps to explore underexploited zones in the reservoir. A data gathering plan to obtain additional subsurface data was developed; it includes temperature surveying for three idle injection wells at which the reservoir simulations indicate high bottom-hole temperatures. The collected data will assist with calibrating the reservoir model.
Data gathering activities are planned for the first quarter of 2021.
This GDR submission includes a preprint of the paper titled "Subsurface Characterization and Machine Learning Predictions at Brady Hot Springs" presented at the 46th Stanford Geothermal Workshop (SGW) on Geothermal Reservoir Engineering from February 16-18, 2021.
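As a rough illustration of the workflow described above (simulation outputs used to train a network that maps operating inputs to production temperatures), the sketch below trains a tiny hand-written multilayer perceptron on synthetic data. It is not the study's TensorFlow code; the single "flow rate" feature, the temperature relation, and all values are made up.

```python
import numpy as np

# Synthetic training set: one operational input -> production temperature.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, (200, 1))                       # e.g. injection allocation
temp = 150 - 30 * x + rng.normal(0, 0.5, (200, 1))    # made-up temperatures (degC)
y = (temp - temp.mean()) / temp.std()                 # standardize the target

# One-hidden-layer perceptron trained by full-batch gradient descent.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)                          # hidden layer
    err = (h @ W2 + b2) - y                           # output-layer error
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)  # backpropagation
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
    for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
        p -= 0.1 * g

# Evaluate with the same metric reported for the BHS networks (MAPE).
pred = (np.tanh(x @ W1 + b1) @ W2 + b2) * temp.std() + temp.mean()
mape = float(np.mean(np.abs((pred - temp) / temp)) * 100)
print(round(mape, 3))  # mean absolute percentage error, %
```

The study's networks used many more inputs (flow rates, injection temperatures, operation history) and deeper TensorFlow architectures, but the training loop above captures the supervised-regression structure of the approach.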
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the supplemental data for the manuscript titled "Characterization of mixing in nanoparticle hetero-aggregates using convolutional neural networks," submitted to Nano Select.
Motivation:
Detection of nanoparticles and classification of their material type in scanning transmission electron microscopy (STEM) images can be a tedious task if it has to be done manually. Therefore, a convolutional neural network is trained to do this task for STEM images of TiO2-WO3 nanoparticle hetero-aggregates. The present dataset contains the training data and some Jupyter notebooks that can be used, after installation of the MMDetection toolbox (https://github.com/open-mmlab/mmdetection), to train the CNN. Details are provided in the manuscript submitted to Nano Select and in the comments of the Jupyter notebooks.
Authors and funding:
The present dataset was created by the authors. The work was funded by the Deutsche Forschungsgemeinschaft within the priority program SPP2289 under contract numbers RO2057/17-1 and MA3333/25-1.
Dataset description:
Four Jupyter notebooks are provided, which can be used for different tasks, according to their names. Details can be found within the comments and markdown cells. These notebooks can be run after installation of MMDetection within the mmdetection folder.
particle_detection_training.ipynb: This notebook can be used for network training.
particle_detection_evaluation.ipynb: This notebook is for evaluation of a trained network with simulated test images.
particle_detection_evaluation_experiment.ipynb: This notebook is for evaluation of a trained network with experimental test images.
particle_detection_measurement_experiment.ipynb: This notebook is for application of a trained network to experimental data.
In addition, a script titled particle_detection_functions.py is provided which contains functions required by the notebooks. Details can be found within the comments.
The zip archive training_data.zip contains the training data. The subfolder HAADF contains the images (sorted as training, validation and test images), the subfolder json contains the annotation (sorted as training, validation and test images). Each file within the json folder provides for each image the following information:
aggregat_no: image id, the number of the corresponding image file
particle_position_x: list of particle position x-coordinates in nm
particle_position_y: list of particle position y-coordinates in nm
particle_position_z: list of particle position z-coordinates in nm
particle_radius: list of volume equivalent particle radii in nm
particle_type: list of material types, 1: TiO2, 2: WO3
particle_shape: list of particle shapes: 0: sphere, 1: box, 2: icosahedron
rotation: list of particle rotations in rad. Each particle is rotated twice by the listed angle (before and after deformation)
deformation: list of particle deformations. After the first rotation, the x-coordinates of the particle's surface mesh are scaled by the factor listed in deformation; y- and z-coordinates are scaled according to 1/sqrt(deformation).
cluster_index: list of cluster indices for each particle
initial_cluster_index: list of initial cluster indices for each particle, before primary clusters of the same material were merged
fractal_dimension: the intended fractal dimension of the aggregate
fractal_dimension_true: the realized geometric fractal dimension of the aggregate (neglecting particle densities)
fractal_dimension_weight_true: the realized fractal dimension of the aggregate (including particle densities)
fractal_prefactor: fractal prefactor
mixing_ratio_intended: the intended mixing ratio (fraction of WO3 particles)
mixing_ratio_true: the realised mixing ratio (fraction of WO3 particles)
mixing_ratio_volume: the realised mixing ratio (fraction of WO3 volume)
mixing_ratio_weight: the realised mixing ratio (fraction of WO3 weight)
particle_1_rho: density of TiO2 used for the calculations
particle_1_size_mean: mean TiO2 radius
particle_1_size_min: smallest TiO2 radius
particle_1_size_max: largest TiO2 radius
particle_1_size_std: standard deviation of TiO2 radii
particle_1_clustersize: average TiO2 cluster size
particle_1_clustersize_init: average TiO2 cluster size of primary clusters (before merging into larger clusters)
particle_1_clustersize_init_intended: intended TiO2 cluster size of primary clusters
particle_2_rho: density of WO3 used for the calculations
particle_2_size_mean: mean WO3 radius
particle_2_size_min: smallest WO3 radius
particle_2_size_max: largest WO3 radius
particle_2_size_std: standard deviation of WO3 radii
particle_2_clustersize: average WO3 cluster size
particle_2_clustersize_init: average WO3 cluster size of primary clusters (before merging into larger clusters)
particle_2_clustersize_init_intended: intended WO3 cluster size of primary clusters
number_of_primary_particles: number of particles within the aggregate
gyration_radius_geometric: gyration radius of the aggregate (neglecting particle densities)
gyration_radius_weighted: gyration radius of the aggregate (including particle densities)
mean_coordination: mean total coordination number (particle contacts)
mean_coordination_heterogen: mean heterogeneous coordination number (contacts with particles of the different material)
mean_coordination_homogen: mean homogeneous coordination number (contacts with particles of the same material)
radius_equiv: list of area equivalent particle radii (in projection)
k_proj: projection direction of the aggregate: 0: z-direction (axis = 2), 1: x-direction (axis = 1), 2: y-direction (axis = 0)
polygons: list of polygons that surround the particle (COCO annotation)
bboxes: list of particle bounding boxes
aggregate_size: projected area of the aggregate, expressed as the radius of the area-equivalent circle in nm
n_pix: number of pixels per image in the horizontal and vertical directions (square images)
pixel_size: pixel size in nm
image_size: image size in nm
add_poisson_noise: 1 if Poisson noise was added, 0 otherwise
frame_time: simulated frame time (required for Poisson noise)
dwell_time: dwell time per pixel (required for Poisson noise)
beam_current: beam current (required for Poisson noise)
electrons_per_pixel: number of electrons per pixel
dose: electron dose in electrons per Å²
add_scan_noise: 1 if scan noise was added, 0 otherwise
beam misposition: parameter that describes how far the beam can be misplaced in pm (required for scan noise)
scan_noise: parameter that describes how far the beam can be misplaced in pixel (required for scan noise)
add_focus_dependence: 1 if a focus effect is included, 0 otherwise
data_format: data format of the images, e.g. uint8
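The noise- and dose-related fields above are interrelated. The following is a minimal sketch of the arithmetic their descriptions imply; the formulas are standard STEM dose relations assumed here for illustration, not taken from the dataset's generation code.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def electrons_per_pixel(beam_current_a: float, dwell_time_s: float) -> float:
    """Electrons delivered to one pixel: I * t / e (assumed relation)."""
    return beam_current_a * dwell_time_s / E_CHARGE

def dose_per_area(beam_current_a: float, dwell_time_s: float,
                  pixel_size_nm: float) -> float:
    """Electron dose in electrons per Å² (1 nm = 10 Å)."""
    pixel_area_A2 = (pixel_size_nm * 10.0) ** 2
    return electrons_per_pixel(beam_current_a, dwell_time_s) / pixel_area_A2

def equivalent_radius_nm(projected_area_nm2: float) -> float:
    """Radius of the circle with the same area as the projection."""
    return math.sqrt(projected_area_nm2 / math.pi)

# Illustrative values (not from the dataset): 50 pA beam, 1 µs dwell, 0.2 nm pixels
print(round(electrons_per_pixel(50e-12, 1e-6)))   # → 312
print(dose_per_area(50e-12, 1e-6, 0.2))           # dose in e/Å²
```

The same area-to-radius conversion underlies both aggregate_size and the entries of radius_equiv.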
There are 24,000 training images, 5,500 validation images, and 5,500 test images, together with their corresponding annotations. Aggregates and STEM images were obtained with the algorithm explained in the main work. The data relevant for CNN training is extracted from the files of the individual aggregates and collected in the subfolder COCO. For the training, validation, and test data there is a file annotation_COCO.json that includes all information required for CNN training.
The zip archive experiment_test_data.zip includes manually annotated experimental images. All experimental images were filtered as explained in the main work. The subfolder HAADF includes thirteen images. The subfolder json includes an annotation file for each image in COCO format. A single file consolidating all annotations is stored in json/COCO/annotation_COCO.json.
The zip archive experiment_measurement.zip includes the experimental images investigated in the manuscript. It contains four subfolders corresponding to the four investigated samples. All experimental images were filtered as explained in the manuscript.
The zip archive particle_detection.zip includes the network that was trained, evaluated, and used for the investigation in the manuscript. The network weights are stored in the file particle_detection/logs/fit/20230622-222721/iter_60000.pth and can be loaded with the Jupyter notebook files. Furthermore, a configuration file, which is required by the notebooks, is stored as particle_detection/logs/fit/20230622-222721/config_file.py.
There is no confidential data in this dataset. It is neither offensive, insulting, nor threatening.
The dataset was generated to discriminate between TiO2 and WO3 nanoparticles in STEM images. A network trained on it may also discriminate between other materials whose STEM contrast is similar to that of TiO2 and WO3, but there is no guarantee.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: Problem solving is an important topic in mathematics because it allows students to apply what they learn to everyday situations. Accordingly, this research aims to characterize how metacognitive regulation supports the process of solving problems involving measures of central tendency. The study follows a descriptive qualitative approach with a case-study design in which six 8th-grade students are analyzed. The results show how planning, monitoring, and evaluation allow students to be more organized, identify and correct errors, understand why they perform certain actions, and gain control over their own learning. In addition, the ontological obstacles the learners exhibit during problem solving are shown to be related to their conceptions, which are influenced by the use of everyday language.
CIMMYT has periodically announced CIMMYT Maize Lines (CMLs). CMLs are carefully selected inbred lines with good general combining ability and a significant number of value-adding traits such as drought tolerance, N use efficiency, acid soil tolerance, resistance to diseases, insects and parasitic weeds. In many instances, they are parental lines of hybrids which have proven successful in one or several maize mega-environments. For each line, there is information about pedigree, heterotic group, disease resistance, agronomic characteristics, adaptation group, and other attributes in the reference CML information catalog file. These CML lines are available at the CIMMYT Maize Germplasm Bank: http://www.cimmyt.org/seed-request/#maize. For more information, please contact: CIMMYT-DMU@cgiar.org
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Chile Average Years in School data was reported at 11.170 Year in 2017. This records an increase from the previous number of 10.989 Year for 2015. Chile Average Years in School data is updated yearly, averaging 10.142 Year from Dec 1990 (Median) to 2017, with 13 observations. The data reached an all-time high of 11.170 Year in 2017 and a record low of 9.032 Year in 1990. Chile Average Years in School data remains in active status in CEIC and is reported by the Ministry of Social Development. The data is categorized under Global Database’s Chile – Table CL.H024: National Socio-Economic Characterization Survey: Education.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Bicontinuous Pickering emulsions (bijels) are a physically interesting class of soft materials with many potential applications, including catalysis, microfluidics and tissue engineering. They are created by arresting the spinodal decomposition of a partially miscible liquid with a (jammed) layer of interfacial colloids. The porosity L (average interfacial separation) of the bijel is controlled by varying the radius (r) and volume fraction (f) of the colloids (L ~ r/f). However, to optimize the bijel structure with respect to other parameters, e.g. quench rate, characterization by L alone is insufficient. Hence, we have used confocal microscopy and X-ray CT to characterize a range of bijels in terms of local and area-averaged interfacial curvatures; we further demonstrate that bijels are bicontinuous using an image-analysis technique known as 'region growing'. In addition, the curvatures of bijels have been monitored as a function of time, revealing an intriguing evolution up to 60 minutes after bijel formation, contrary to previous understanding.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental results for the extensive characterization of the low-cost module TES1-12730 by Thermonamic Electronics Corporation, obtained using the Unified Method for Thermo-Electric Module characterization (see references). Data are processed and visualized using MATLAB version 9.7.0.1319299 (R2019b) Update 5.
Tests were performed at 14 operating-temperature values, in steps of 3 °C, from 10 °C to 49 °C.
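The stated temperature grid can be reproduced directly (assuming inclusive endpoints, which is what the count of 14 values implies):

```python
# 10 °C to 49 °C in 3 °C steps gives exactly 14 set points.
temperatures_c = list(range(10, 50, 3))
print(len(temperatures_c), temperatures_c[0], temperatures_c[-1])  # → 14 10 49
```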
Datasets include:
Software includes:
This data set contains Vegetation and Site Characterization data from the 1998-99 ATLAS Grid Transect, Northern Alaska. This data set comprises the text and data presented in the data report "A western Alaskan transect to examine interactions of climate, substrate, vegetation, and spectral reflectance: ATLAS grids and transects, 1998-1999." The report is a compilation of data from two projects conducted on the North Slope of Alaska during the summers of 1998 and 1999. The first project involves environmental, climate, soil, vegetation, and remote-sensing data collected from 8 ATLAS (Arctic Transitions in the Land-Atmosphere System) grids established along a North-South transect from Barrow to Ivotuk, Alaska. The original purposes of the study were (1) to characterize the major zonal vegetation types found along the North Slope climate gradient, (2) to quantify differences between acidic and non-acidic tundra along the same gradient, and (3) to investigate relationships between plant biomass, Leaf Area Index (LAI), and Normalized Difference Vegetation Index (NDVI). This part also includes a brief analysis of interactions between plant functional type composition, LAI, NDVI, and summer temperature. This analysis is limited to moist acidic tundra (MAT) and moist non-acidic tundra (MNT) comparisons using data from six of the eight grids that best represent acidic and non-acidic mesic vegetation. The second project is an accuracy assessment of a Landsat MSS-derived landcover map of northern Alaska, which involved creating several large transects over northwest Alaska. Included here is a table of LAI measurements from eight random points along these transects, as well as the accompanying releve and site factor data sheets. No analysis of these data is presented.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Chile Average Years in School: Metropolitana data was reported at 11.819 Year in 2017. This records an increase from the previous number of 11.604 Year for 2015. Chile Average Years in School: Metropolitana data is updated yearly, averaging 11.335 Year from Dec 2006 (Median) to 2017, with 6 observations. The data reached an all-time high of 11.819 Year in 2017 and a record low of 10.831 Year in 2006. Chile Average Years in School: Metropolitana data remains in active status in CEIC and is reported by the Ministry of Social Development. The data is categorized under Global Database’s Chile – Table CL.H024: National Socio-Economic Characterization Survey: Education.