34 datasets found
  1. Data from: Additional Examples

    • springernature.figshare.com
    zip
    Updated Jun 1, 2023
    Cite
    Susanna-Assunta Sansone; Philippe Rocca-Serra; Pawel Krajewski; Hanna Ćwiek-Kupczyńska; Alejandra Gonzalez-Beltran; Emilie J. Millet; Katarzyna Filipiak; Agnieszka Ławrynowicz; Augustyn Markiewicz; Fred van Eeuwijk (2023). Additional Examples [Dataset]. http://doi.org/10.6084/m9.figshare.11819274.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Susanna-Assunta Sansone; Philippe Rocca-Serra; Pawel Krajewski; Hanna Ćwiek-Kupczyńska; Alejandra Gonzalez-Beltran; Emilie J. Millet; Katarzyna Filipiak; Agnieszka Ławrynowicz; Augustyn Markiewicz; Fred van Eeuwijk
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The collection contains two sets of examples: 1. exemplary RDF datasets, demonstrating the generation of semantic models for the analyses described in the Results/Exemplary analyses section with the 'SemLMM' R package; 2. exemplary SPARQL queries, implementing the use cases discussed in the Results/Exemplary queries section.

  2. Travel time to cities and ports in the year 2015

    • figshare.com
    tiff
    Updated May 30, 2023
    Cite
    Andy Nelson (2023). Travel time to cities and ports in the year 2015 [Dataset]. http://doi.org/10.6084/m9.figshare.7638134.v4
    Explore at:
    tiff (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Andy Nelson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5

    If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD

    The following text is a summary of the information in the above Data Descriptor.

    The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port for a range of city and port sizes.

    The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.

    These maps represent a unique global representation of physical access to essential services offered by cities and ports.

    The datasets: travel_time_to_cities_x.tif (where x has values from 1 to 12). The value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in the year 2015 (see PDF report).

    travel_time_to_ports_x (where x ranges from 1 to 5). The value of each pixel is the estimated travel time to the nearest port in 2015. There are 5 data layers based on different port sizes.

    Format: Raster Dataset, GeoTIFF, LZW compressed
    Unit: Minutes
    Data type: 16-bit Unsigned Integer
    No data value: 65535
    Flags: None
    Spatial resolution: 30 arc seconds
    Spatial extent: Upper left -180, 85; Lower left -180, -60; Upper right 180, 85; Lower right 180, -60
    Spatial Reference System (SRS): EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)
    Temporal resolution: 2015
    Temporal extent: Updates may follow for future years, but these are dependent on the availability of updated inputs on travel times and city locations and populations.

    Methodology: Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to, and (ii) a transition matrix that represents the cost or time to travel across a surface.

    The set of locations were based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016) that represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points, spaced at 1km distance around the perimeter of each urban area.

    Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.

    The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
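
    As a rough illustration of this workflow (not the authors' code, which is included in the zip file described under Code below), a minimal gdistance sketch in R; the input file names and cost units are assumptions:

      # Load a friction surface (cost to cross each cell) and target coordinates.
      # "friction.tif" and "ports.csv" are hypothetical placeholders.
      library(raster)
      library(gdistance)

      friction <- raster("friction.tif")              # e.g. minutes per metre
      targets  <- as.matrix(read.csv("ports.csv"))    # two columns: lon, lat

      # Build a conductance-based transition matrix (1/cost) and apply the
      # geographic correction so cell distances are valid in lat/long.
      tr  <- transition(friction, function(x) 1 / mean(x), directions = 8)
      trC <- geoCorrection(tr, type = "c")

      # Accumulated least-cost travel time from every cell to its nearest target.
      access <- accCost(trC, targets)
      writeRaster(access, "travel_time.tif", overwrite = TRUE)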

    Code: The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.

    Validation: The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes, with an interquartile range of 48 to 143 minutes, while the median journey time estimated by the Google API was 106 minutes, with an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left, and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes, with an interquartile range of −35.5 to 2.0 minutes, while the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9%, with an interquartile range of −30.6% to 2.7%, while the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.

    This process and results are included in the validation zip file.

    Usage Notes: The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.

    The nine layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent the travel time to settlements with populations between 5,000 and 10,000, between 10,000 and 20,000, and between 20,000 and 50,000 people.
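
    For example, a minimal R sketch of this minimum-combination step (the layer indices chosen here are illustrative placeholders, not the documented population bands):

      library(raster)

      # Stack three hypothetical population-band layers and take the
      # per-pixel minimum travel time across them.
      s <- stack("travel_time_to_cities_4.tif",
                 "travel_time_to_cities_5.tif",
                 "travel_time_to_cities_6.tif")
      NAvalue(s) <- 65535                  # the documented no-data value
      combined <- calc(s, fun = min)       # cell-wise minimum across layers
      writeRaster(combined, "travel_time_5k_50k.tif", overwrite = TRUE)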

    The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.

    The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface will be accumulated as part of the accumulated cost function, and it is likely that locations further away from targets will have a greater divergence from a plausible travel time than those closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although they will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys will use the fastest mode of transport and take the shortest path.

  3. Genome-wide SNP datasets for the non-native pink salmon in Norway

    • data.niaid.nih.gov
    • dataone.org
    • +1more
    zip
    Updated Feb 5, 2024
    Cite
    Simo Njabulo Maduna; Paul Eric Aspholm; Ane-Sofie Bednarczyk Hansen; Cornelya Klütsch; Snorre Hagen (2024). Genome-wide SNP datasets for the non-native pink salmon in Norway [Dataset]. http://doi.org/10.5061/dryad.zw3r228f2
    Explore at:
    zip (available download formats)
    Dataset updated
    Feb 5, 2024
    Dataset provided by
    Norwegian Institute of Bioeconomy Research
    Authors
    Simo Njabulo Maduna; Paul Eric Aspholm; Ane-Sofie Bednarczyk Hansen; Cornelya Klütsch; Snorre Hagen
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Norway
    Description

    Effective management of non-indigenous species requires knowledge of their dispersal factors and founder events. We aim to identify the main environmental drivers favouring dispersal events along the invasion gradient and to characterize the spatial patterns of genetic diversity in feral populations of the non-native pink salmon within its epicentre of invasion in Norway. We first conducted SDM using four modelling techniques with varying levels of complexity, which encompassed both regression-based and tree-based machine-learning algorithms, using climatic data from the present to 2050. Then we used the triple-enzyme restriction-site associated DNA sequencing (3RADseq) approach to genotype over 30,000 high-quality single-nucleotide polymorphisms to elucidate patterns of genetic diversity and gene flow within the pink salmon putative invasion hotspot. We discovered that temperature- and precipitation-related variables drove pink salmon distributional shifts across its non-native ranges, and that climate-induced favourable areas will remain stable for the next 30 years. In addition, all SDMs identified north-eastern Norway as the epicentre of the pink salmon invasion, and genomic data revealed minimal variation in genetic diversity across the sampled populations at a genome-wide level in this region. However, upon utilizing a specific group of 'diagnostic' SNPs, we observed a significant degree of genetic differentiation, ranging from moderate to substantial, and detected four hierarchical genetic clusters concordant with geography. Our findings suggest that fluctuations of climate extreme events associated with ongoing climate change will likely maintain environmental favourability for the pink salmon outside its 'native'/introduced ranges. Local invaded rivers are themselves a potential source population of invaders in the ongoing secondary spread of pink salmon in Northern Norway. Our study shows that SDMs and genomic data can reveal species distribution determinants and provide indicators to aid in post-control measures and potential inferences of their success.

    Methods

    3RAD library preparation and sequencing: We prepared RADseq libraries using the Adapterama III library preparation protocol of Bayona-Vásquez et al. (2019; their Supplemental File SI). For each sample, ~40-100 ng of genomic DNA were digested for 1 h at 37 °C in a solution with 1.5 µl of 10x Cutsmart® buffer, 0.25 µl of Read 1 enzyme (MspI, NEB®) at 20 U/µl, 0.25 µl of Read 2 enzyme (BamHI-HF) at 20 U/µl, 0.25 µl of Read 1 adapter dimer-cutting enzyme (ClaI) at 20 U/µl, 1 µl of i5Tru adapter at 2.5 µM, 1 µl of i7Tru adapter at 2.5 µM and 0.75 µl of dH2O. After digestion/ligation, samples were pooled and cleaned with 1.2x Sera-Mag SpeedBeads (Fisher Scientific™) in a 1.2:1 (SpeedBeads:DNA) ratio, and we eluted cleaned DNA in 60 µl of TLE. An enrichment PCR of each sample was carried out with 10 µl of 5x Kapa Long Range Buffer (Kapa Biosystems, Inc.), 0.25 µl of KAPA LongRange DNA Polymerase at 5 U/µl, 1.5 µl of dNTP mix (10 mM each dNTP), 3.5 µl of MgCl2 at 25 mM, 2.5 µl of iTru5 primer at 5 µM, 2.5 µl of iTru7 primer at 5 µM and 5 µl of pooled DNA. The i5 and i7 adapters were ligated to each sample using a unique combination (2 i5 x 1 i7 indexes). The temperature conditions for PCR enrichment were 94 °C for 2 min of initial denaturation, followed by 10 cycles of 94 °C for 20 sec, 57 °C for 15 sec and 72 °C for 30 sec, and a final cycle of 72 °C for 5 min.
    The enriched samples were each cleaned and quantified with a Quantus™ Fluorometer. Cleaned, indexed and quantified library pools were pooled to equimolar concentrations and sent to the Norwegian Sequencing Centre (NSC) for quality control, a subsequent final size selection using a one-sided bead clean-up (0.7:1 ratio) to capture 550 bp +/- 10% fragments, and final paired-end (PE) 150 bp sequencing on one lane each of the Illumina HiSeq 4000 platform.

    Data filtering: We filtered genotype data and characterized singleton SNP loci and multi-site variants (MSVs) using filtering procedures and custom scripts available in STACKS Workflow v.2 (https://github.com/enormandeau/stacks_workflow). First, we filtered the 'raw' VCF file, keeping only SNPs that (i) showed a minimum depth of four (-m 4), (ii) were called in at least 80% of the samples in each site (-p 80) and (iii) for which at least two samples had the rare allele, i.e., Minor Allele Sample (MAS; -S 2), using the python script 05_filter_vcf_fast.py. Second, we excluded samples with more than 20% missing genotypes from the data set. Third, we calculated pairwise relatedness between samples with the Yang et al. (2010) algorithm and individual-level heterozygosity in vcftools v.0.1.17 (Danecek et al., 2010). Additionally, we calculated pairwise kinship coefficients among individuals using the KING-robust method (Manichaikul et al., 2010) with the R package SNPRelate v.1.28.0 (Zheng et al., 2012). Then, we estimated genotyping error rates between technical replicates using the software tiger v1.0 (Bresadola et al., 2020). Finally, from each pair of closely related individuals we removed the one exhibiting the higher level of missing data, along with samples that showed extremely low heterozygosity (< -0.2) based on graphical inspection of individual-level heterozygosity per sampling population. Fourth, we conducted a secondary dataset filtering step using 05_filter_vcf_fast.py, keeping the above-mentioned data filtering cut-off parameters (i.e., -m = 4; -p = 80; -S = 3). Fifth, we calculated a suite of four summary statistics to discriminate high-confidence SNPs (singleton SNPs) from SNPs exhibiting a duplication pattern (duplicated SNPs; MSVs): (i) median of allele ratio in heterozygotes (MedRatio), (ii) proportion of heterozygotes (PropHet), (iii) proportion of rare homozygotes (PropHomRare) and (iv) inbreeding coefficient (FIS). We calculated each parameter from the filtered VCF file using the python script 08_extract_snp_duplication_info.py. The four parameters calculated for each locus were plotted against each other to visualize their distribution across all loci using the R script 09_classify_snps.R. Based on the methodology of McKinney et al. (2017) and by plotting different combinations of each parameter, we graphically fixed cut-offs for each parameter. Sixth, we used the python script 10_split_vcf_in_categories.py to classify SNPs and generate two separate datasets: the "SNP dataset", based on SNP singletons only, and the "MSV dataset", based on duplicated SNPs only, which we excluded from further analyses. Seventh, we post-filtered the SNP dataset by keeping all unlinked SNPs within each 3RAD locus using the 11_extract_unlinked_snps.py script with a minimum difference of 0.5 (-diff_threshold 0.5) and a maximum distance of 1,000 bp (-max_distance 1,000).
    Then, for the SNP dataset, we filtered out SNPs located in unplaced scaffolds, i.e., contigs that were not part of the 26 chromosomes of the pink salmon genome.
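
    As an illustration of the KING-robust kinship step above, a minimal SNPRelate sketch in R (the VCF file name is a placeholder, and the parameter choices are not the authors'):

      library(SNPRelate)

      # Convert the filtered VCF to GDS and compute KING-robust kinship.
      snpgdsVCF2GDS("filtered_snps.vcf", "snps.gds", method = "biallelic.only")
      geno <- snpgdsOpen("snps.gds")
      king <- snpgdsIBDKING(geno)
      pairs <- snpgdsIBDSelection(king)        # pairwise table of kinship coefficients
      head(pairs[order(-pairs$kinship), ])     # most closely related pairs first
      snpgdsClose(geno)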

  4. TRI BasicPlus Data Files (2020): Additional Info. (Sec 8.11)

    • redivis.com
    Updated Oct 2, 2022
    Cite
    Environmental Impact Data Collaborative (2022). TRI BasicPlus Data Files (2020): Additional Info. (Sec 8.11) [Dataset]. https://redivis.com/datasets/pk9c-2q8g8bw9v/usage
    Explore at:
    Dataset updated
    Oct 2, 2022
    Dataset authored and provided by
    Environmental Impact Data Collaborative
    Time period covered
    2020
    Description

    The “Type 5” file contains information from Section 8.11 of the TRI Reporting Form R. Section 8.11 is an optional text section in which facilities may choose to provide more detail about activities taken to reduce releases of the TRI chemical being reported. Collection of Section 8.11 comment data began in reporting year 2005. Only Form R submissions that have this optional text are included in File Type 5. (Note: EPA has received ~10 comments for Section 8.11 from facilities that submitted Form R revisions for years prior to 2005.)

  5. Respiration_chambers/raw_log_files and combined datasets of biomass and...

    • cmr.earthdata.nasa.gov
    • data.aad.gov.au
    • +2more
    Updated Dec 18, 2018
    Cite
    (2018). Respiration_chambers/raw_log_files and combined datasets of biomass and chamber data, and physical parameters [Dataset]. http://doi.org/10.26179/5c1827d5d6711
    Explore at:
    Dataset updated
    Dec 18, 2018
    Time period covered
    Jan 27, 2015 - Feb 23, 2015
    Area covered
    Description

    General overview: The following datasets are described by this metadata record, and are available for download from the provided URL.

    • Raw log files, physical parameters raw log files
    • Raw excel files, respiration/PAM chamber raw excel spreadsheets
    • Processed and cleaned excel files, respiration chamber biomass data
    • Raw rapid light curve excel files (duplicated from the raw log files), combined dataset of pH, temperature, oxygen, salinity and velocity for the experiment
    • Associated R script file for pump cycles of the respiration chambers

    ####

    Physical parameters raw log files

    Raw log files:
    1) DATE
    2) Time = UTC+11
    3) PROG = automated program to control sensors and collect data
    4) BAT = amount of battery remaining
    5) STEP = check Aquation manual
    6) SPIES = check Aquation manual
    7) PAR = photosynthetically active radiation
    8) Levels = check Aquation manual
    9) Pumps = program for pumps
    10) WQM = check Aquation manual

    ####

    Respiration/PAM chamber raw excel spreadsheets

    Abbreviations in headers of datasets. Note: two data sets are provided in different formats, raw and cleaned (adj). These are the same data, with the PAR column moved over to PAR.all for analysis. All headers are the same. The cleaned (adj) dataframe will work with the R syntax below; alternatively, add code to do the cleaning in R.

    Date: ISO 1986 - check
    Time: UTC+11 unless otherwise stated
    DATETIME: UTC+11 unless otherwise stated
    ID (of instrument in respiration chambers):
    ID43 = pulse amplitude fluorescence measurement of control
    ID44 = pulse amplitude fluorescence measurement of acidified chamber
    ID=1 dissolved oxygen
    ID=2 dissolved oxygen
    ID3 = PAR
    ID4 = PAR
    PAR = photosynthetically active radiation (umols)
    F0 = minimal fluorescence from PAM
    Fm = maximum fluorescence from PAM
    Yield = (Fm - F0)/Fm
    rChl = an estimate of chlorophyll (note: this is uncalibrated and is an estimate only)
    Temp = temperature, degrees C
    PAR2 = photosynthetically active radiation 2
    DO = dissolved oxygen
    %Sat = saturation of dissolved oxygen
    Notes = the program of the underwater submersible logger, with the following abbreviations:
    Notes-1) PAM=
    Notes-2) PAM = gain level set (see Aquation manual for more detail)
    Notes-3) Acclimatisation = program of slowly introducing treatment water into chamber
    Notes-4) Shutter start up 2 sensors+sample… = Shutter PAM automatic set-up procedure (see Aquation manual)
    Notes-5) Yield step 2 = PAM yield measurement and calculation of control
    Notes-6) Yield step 5 = PAM yield measurement and calculation of acidified
    Notes-7) Abatus respiration DO and PAR step 1 = program to measure dissolved oxygen and PAR (see Aquation manual); steps 1-4 are different stages of this program, including pump cycles, DO and PAR measurements

    8) Rapid light curve data:
    Pre LC: a yield measurement prior to the following measurement
    After 10.0 sec at 0.5% to 8%: level of each of the 8 steps of the rapid light curve
    Odyssey PAR (only in some deployments): an extra measure of PAR (umols) using an Odyssey data logger
    Dataflow PAR: an extra measure of PAR (umols) using a Dataflow sensor
    PAM PAR: copied from the PAR or PAR2 column
    PAR all: the complete PAR series; this is the column that should be used
    Deployment: identifies which deployment the data came from

    ####

    Respiration chamber biomass data

    The data are chlorophyll a biomass from cores taken from the respiration chambers. The headers are: Depth (mm); Treat (acidified or control); Chl a (pigment and indicator of biomass); Core (5 cores were collected from each chamber, three were analysed for chl a). These are pseudoreplicates/subsamples from the chambers and should not be treated as replicates.

    ####

    Associated R script file for pump cycles of respiration chambers

    Associated respiration chamber data to determine the times when the respiration chamber pumps delivered treatment water to the chambers, determined from the Aquation log files (see associated files). Use the chamber cut times to determine net production rates. Note: users need to avoid the times when the respiration chambers are delivering water, as this will give incorrect results. The headers used in the attached/associated R file are start regression and end regression. The remaining headers are not used unless called for in the associated R script. The last columns of these datasets (intercept, ElapsedTimeMincoef) are determined from the linear regressions described below.

    To determine the rate of change of net production, coefficients of the regression of oxygen consumption in discrete 180 minute data blocks were determined. R squared values for the fitted regressions of these coefficients were consistently high (greater than 0.9). We make two assumptions in the calculation of net production rates: first, that heterotrophic community members do not change their metabolism under OA; and second, that the heterotrophic communities are similar between treatments.
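
    A minimal R sketch of one such block regression (column names here are assumptions, not taken from the actual files):

      # Fit dissolved oxygen against elapsed time within one 180 minute block,
      # after excluding pump-delivery periods via the start/end regression cut times.
      chamber <- read.csv("chamber_block.csv")     # hypothetical export of one block
      fit <- lm(DO ~ ElapsedTimeMin, data = chamber)
      coef(fit)                 # intercept and ElapsedTimeMin coefficient (rate of change)
      summary(fit)$r.squared    # reported as consistently > 0.9 for these fits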

    ####

    Combined dataset pH, temperature, oxygen, salinity, velocity for experiment

    These data are rapid light curve data generated from a Shutter PAM fluorimeter. There are eight steps in each rapid light curve. Note: the software component of the Shutter PAM fluorimeter for sensor 44 appeared to be damaged and would not cycle through the PAR cycles. Therefore the rapid light curves and recovery curves should only be used for the control chambers (sensor ID43).

    The headers are:
    PAR: photosynthetically active radiation
    relETR: F0/Fm x PAR
    Notes: stage/step of light curve
    Treatment: acidified or control

    The associated light treatments in each stage are listed below. Each actinic light intensity is held for 10 seconds, then a saturating pulse is taken (see PAM methods).

    After 10.0 sec at 0.5% = 1 umols PAR
    After 10.0 sec at 0.7% = 1 umols PAR
    After 10.0 sec at 1.1% = 0.96 umols PAR
    After 10.0 sec at 1.6% = 4.32 umols PAR
    After 10.0 sec at 2.4% = 4.32 umols PAR
    After 10.0 sec at 3.6% = 8.31 umols PAR
    After 10.0 sec at 5.3% = 15.78 umols PAR
    After 10.0 sec at 8.0% = 25.75 umols PAR

    This dataset appears to be missing data; note that the D5 rows are potentially not usable information.

    See the word document in the download file for more information.

  6. Data from: GOES-R PLT Cloud Radar System (CRS)

    • earthdata.nasa.gov
    • s.cnmilf.com
    • +3more
    Updated May 13, 2019
    + more versions
    Cite
    GHRC_DAAC (2019). GOES-R PLT Cloud Radar System (CRS) [Dataset]. http://doi.org/10.5067/GOESRPLT/CRS/DATA101
    Explore at:
    Dataset updated
    May 13, 2019
    Dataset authored and provided by
    GHRC_DAAC
    Description

    The GOES-R PLT Field Campaign Cloud Radar System (CRS) dataset provides high-resolution profiles of reflectivity and Doppler velocity at aircraft nadir along the flight track. The CRS was flown aboard a NASA ER-2 high-altitude aircraft during the GOES-R Post Launch Test (PLT) field campaign. The GOES-R PLT field campaign took place from March 21 to May 17, 2017 in support of post-launch product validation of the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM) aboard the GOES-R, now GOES-16, satellite. The CRS data files are available in netCDF-3 format with browse imagery available in PNG format.

  7. VOYAGER 2 SATURN POSITION RESAMPLED DATA 48.0 SECONDS

    • catalog.data.gov
    • catalog-dev.data.gov
    • +1more
    Updated Apr 9, 2025
    + more versions
    Cite
    National Aeronautics and Space Administration (2025). VOYAGER 2 SATURN POSITION RESAMPLED DATA 48.0 SECONDS [Dataset]. https://catalog.data.gov/dataset/voyager-2-saturn-position-resampled-data-48-0-seconds-7eee6
    Explore at:
    Dataset updated
    Apr 9, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    This data set includes Voyager 2 Saturn encounter position data that have been generated at a 48.0 second sample rate using the NAIF SPICE kernels. The data set is composed of 4 columns: 1) ctime - the data acquisition time. The time is always output in the ISO standard spacecraft event time format (yyyy-mm-ddThh:mm:ss.sss) but is stored internally as Cline time, which is measured in seconds after 00:00:00.000 Jan 01, 1966; 2) r - the radial distance from Saturn in Rs = 60330 km; 3) longitude - the east longitude of the spacecraft in degrees; 4) latitude - the latitude of the spacecraft in degrees. Position data are given in Minus Saturn Longitude System (kronographic) coordinates.
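
    For illustration, a small R helper for the Cline-time convention described above (the exact output formatting of the ctime column may differ slightly):

      # Convert Cline time (seconds after 1966-01-01 00:00:00.000 UTC) to an
      # ISO-style spacecraft event time string.
      cline_to_iso <- function(sec) {
        t <- as.POSIXct(sec, origin = "1966-01-01", tz = "UTC")
        format(t, "%Y-%m-%dT%H:%M:%OS3")
      }
      cline_to_iso(0)   # "1966-01-01T00:00:00.000"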

  8. Section R Housing Quality and Household Assets

    • gimi9.com
    + more versions
    Cite
    Section R Housing Quality and Household Assets [Dataset]. https://gimi9.com/dataset/data-gov_section-r-housing-quality-and-household-assets-7edff
    Explore at:
    License

    Attribution-NoDerivs 4.0 (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/
    License information was derived automatically

    Description

    The survey interviewed 254 retailer shops in 10 sub-cities of Addis Ababa: 30 supermarkets, 20 mini-markets, 100 regular shops, 80 dairy shops and 24 open-market shops selling dairy products. Details of the sampling strategy are found in the attachment. The survey collected information on the characteristics of the shops, details of dairy products sold, prices and quality. Policy makers, researchers, and other stakeholders can use this data to analyse the dairy value chain and dairy retailing practices in Ethiopia. This data set was collected through research of the project "Improving the evidence and policies for better performing livestock systems in Ethiopia", led by the International Food Policy Research Institute as part of the Feed the Future Innovation Lab for Livestock Systems.

  9. Elevation Points - Section R

    • hub.arcgis.com
    • geohub.brampton.ca
    • +1more
    Updated Dec 3, 2016
    + more versions
    Cite
    City of Brampton (2016). Elevation Points - Section R [Dataset]. https://hub.arcgis.com/maps/brampton::elevation-points-section-r
    Explore at:
    Dataset updated
    Dec 3, 2016
    Dataset authored and provided by
    City of Brampton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Elevation Points - Section R, last updated in 2024. Part of A through R, divided by concession blocks across the municipal extents. Often used in 3D modelling. UTM Co-ordinates, with height Z-values in Metres.

  10. Topographic Section 1:25,000 of the third military mapping - 5262-004

    • data.europa.eu
    Cite
    Topographic Section 1:25,000 of the third military mapping - 5262-004 [Dataset]. https://data.europa.eu/data/datasets/cz-cuzk-topo75-r-5262-004
    Explore at:
    Description

    Colour raster copies of different archived issues of the so-called topographic sections (toposections) at scale 1:25,000, which originate in the Third Austrian Military Mapping. The maps were published between 1872 and 1953 in the Austro-Hungarian Empire and later in Czechoslovakia and other successor states. The territorial extent of the file significantly exceeds the boundaries of today's Czech Republic.

  11. CellChat object of mouse brain 10X Visium dataset

    • figshare.com
    application/gzip
    Updated Nov 7, 2023
    Cite
    Suoqin Jin (2023). CellChat object of mouse brain 10X Visium dataset [Dataset]. http://doi.org/10.6084/m9.figshare.24516436.v1
    Explore at:
    application/gzip (available download formats)
    Dataset updated
    Nov 7, 2023
    Dataset provided by
    figshare
    Authors
    Suoqin Jin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CellChat object after running cell-cell communication analysis of the mouse brain 10X Visium dataset using CellChat v2. We downloaded this dataset from https://www.10xgenomics.com/resources/datasets/mouse-brain-serial-section-1-sagittal-anterior-1-standard-1-0-0. Biological annotations of spots (i.e., cell group information) are predicted using the Seurat R package.
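
    A minimal R sketch of loading such an object and inspecting the aggregated network (the .rds file name is a placeholder; the function and slot names follow the CellChat package):

      library(CellChat)

      # Load the provided CellChat object (the analysis has already been run).
      cellchat <- readRDS("cellchat_mouse_brain_visium.rds")   # placeholder name
      groupSize <- as.numeric(table(cellchat@idents))

      # Circle plot of the number of inferred interactions between cell groups.
      netVisual_circle(cellchat@net$count, vertex.weight = groupSize,
                       weight.scale = TRUE, title.name = "Number of interactions")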

  12. Data from: Dataset of the study "Thirty seconds sit-to-stand test as an...

    • portalcientifico.sergas.gal
    Updated 2022
    Cite
    Díaz-Balboa, Estíbaliz; González-Salvado, Violeta; Rodríguez-Romero, Beatriz; Martínez-Monzonís, Amparo; Pedreira-Pérez, Milagros; Cuesta-Vargas, Antonio I.; López-López, Rafael; González-Juanatey, José R.; Peña-Gil, Carlos (2022). Dataset of the study "Thirty seconds sit-to-stand test as an alternative for estimating peak oxygen uptake and six-minutes walking distance in women with breast cancer: a cross-sectional study" [Dataset]. https://portalcientifico.sergas.gal/documentos/668fc449b9e7c03b01bd8bfd?lang=es
    Explore at:
    Dataset updated
    2022
    Authors
    Díaz-Balboa, Estíbaliz; González-Salvado, Violeta; Rodríguez-Romero, Beatriz; Martínez-Monzonís, Amparo; Pedreira-Pérez, Milagros; Cuesta-Vargas, Antonio I.; López-López, Rafael; González-Juanatey, José R.; Peña-Gil, Carlos
    Description

    Data were collected to study the usefulness of the thirty-second sit-to-stand test as an alternative for estimating peak oxygen uptake and six-minute walking distance in women with breast cancer; this cross-sectional study is derived from the ONCORE project (randomized controlled trial on a comprehensive exercise-based cardiac rehabilitation program for the prevention of anthracycline- and/or anti-HER2 antibody-induced cardiotoxicity in breast cancer), ClinicalTrials.gov identifier: NCT03964142. The DATASET FILE (xlsx) includes 4 sheets:
    - Dataset_variables: all variables collected pre-post intervention
    - Descriptive data (baseline): variables used for the descriptive analysis before the intervention (baseline)
    - Data_CPET-30STS(pre-post): pooled data from CPET-30STS pre-post intervention
    - Data_6MWD-30STS(pre-post): pooled data from 6MWD-30STS pre-post intervention

  13. mouse brain 10X Visium dataset for spatially proximal cell-cell...

    • figshare.com
    application/gzip
    Updated Oct 31, 2023
    Cite
    Suoqin Jin (2023). mouse brain 10X Visium dataset for spatially proximal cell-cell communication analysis [Dataset]. http://doi.org/10.6084/m9.figshare.23621151.v2
    Explore at:
    application/gzip (available download formats)
    Dataset updated
    Oct 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Suoqin Jin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A mouse brain 10X Visium dataset for spatially proximal cell-cell communication analysis using CellChat v2. We downloaded this dataset from https://www.10xgenomics.com/resources/datasets/mouse-brain-serial-section-1-sagittal-anterior-1-standard-1-0-0. Biological annotations of spots (i.e., cell group information) are predicted using the Seurat R package.

  14. Future Global Aridity Index and PET Database (CMIP_6)

    • scidb.cn
    Updated Jan 15, 2024
    Cite
    Robert John Zomer; Antonio Trabucco (2024). Future Global Aridity Index and PET Database (CMIP_6) [Dataset]. http://doi.org/10.57760/sciencedb.nbsdc.00086
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 15, 2024
    Dataset provided by
    Science Data Bank
    Authors
    Robert John Zomer; Antonio Trabucco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Global Aridity Index and Potential Evapotranspiration Database: CMIP_6 Future Projections (Future_Global_AI_PET)

    Robert J. Zomer (1, 2, 3); Antonio Trabucco (1, 4)
    1. Euro-Mediterranean Center on Climate Change, IAFES Division, Sassari, Italy. 2. Centre for Mountain Futures, Kunming Institute of Botany, Chinese Academy of Science, Kunming, Yunnan, China. 3. CIFOR-ICRAF China Program, World Agroforestry (ICRAF), Kunming, Yunnan, China. 4. National Biodiversity Future Center (NBFC), Palermo, Italy.

    The Global Aridity Index and Potential Evapotranspiration (Global AI-PET) Database: CMIP_6 Future Projections - Version 1 (Future_Global_AI_PET) provides a high-resolution (30 arc-seconds) global raster dataset of average monthly and annual potential evapotranspiration (PET) and aridity index (AI) for two historical (1960-1990; 1970-2000) and two future (2021-2040; 2041-2060) time periods for each of 22 CMIP6 Earth System Models across four emission scenarios (SSP: 126, 245, 370, 585). The database also includes three averaged multi-model ensembles produced for each of the four emission scenarios:

    • All Models: includes all of the 22 ESM, as available within a particular SSP.
    • High Risk: includes 5 ESM identified as projecting the highest increases in temperature and precipitation and lying outside, and significantly higher than, the majority of estimates.
    • Majority Consensus: includes 15 ESM, that is, all available ESM excluding those in the "High Risk" category and those missing data across all of the 4 SSP; further herein referred to as the "Consensus" category.

    These geo-spatial datasets have been produced with the support of the Euro-Mediterranean Center on Climate Change, IAFES Division; the Centre for Mountain Futures, Kunming Institute of Botany, Chinese Academy of Science; the CIFOR-ICRAF China Program, World Agroforestry (CIFOR-ICRAF); and the National Biodiversity Future Center (NBFC).

    These datasets are provided under a CC BY 4.0 License (please attribute), in standard GeoTIFF format, WGS84 Geographic Coordinate System, 30 arc seconds or ~1 km at the equator, to support studies contributing to sustainable development, biodiversity and environmental conservation, poverty alleviation, and adaptation to climate change, among other global, regional, national, and local concerns.

    The Future_Global_AI_PET is available online from the Science Data Bank (ScienceDB) at: https://doi.org/10.57760/sciencedb.nbsdc.00086

    Previous versions of the Global Aridity Index and PET Database are available online here: https://figshare.com/articles/dataset/Global_Aridity_Index_and_Potential_Evapotranspiration_ET0_Climate_Database_v2/7504448/6

    Technical questions regarding the datasets can be directed to Robert Zomer (r.zomer@mac.com) or Antonio Trabucco (antonio.trabucco@cmcc.it).

    Methods:

    Based on the results of comparative validations, the Hargreaves model has been evaluated as one of the best fits for modelling PET and the aridity index globally with the available high-resolution downscaled and bias-corrected climate projections, and was chosen for the implementation of the Global-AI_PET CMIP6 Future Projections. This method performs almost as well as the Penman-Monteith method, but requires less parameterization and has significantly lower sensitivity to error in climatic inputs (Hargreaves and Allen, 2003). The currently available downscaled CMIP6 projections (from WorldClim) provide a more limited set of climate variables, suited to temperature-based evapotranspiration methods such as the Hargreaves method.

    Hargreaves (1985, 1994) uses mean monthly temperature (Tmean), mean monthly temperature range (TD) and extraterrestrial radiation (RA, radiation on top of the atmosphere) to calculate PET, as shown below:

    PET = 0.023 * RA * (Tmean + 17.8) * TD^0.5

    where RA is extraterrestrial radiation at the top of the atmosphere, TD is the difference between mean maximum and mean minimum temperatures (Tmax - Tmin), and Tmean is equal to (Tmax + Tmin)/2.

    The Hargreaves equation has been implemented globally on a per-grid-cell basis at 30 arc seconds resolution (~1 km2 at the equator), in ArcGIS (v11.1) using Python v3.2 (see Code Availability below), to estimate PET/AI globally using future projections provided by the CMIP6 collaboration. The data to parametrize the equation were obtained from the WorldClim (worldclim.org) online data repository, which provides bias-corrected downscaled monthly values of minimum temperature, maximum temperature, and precipitation for 25 CMIP6 Earth System Models (ESMs), across four Shared Socio-economic Pathways (SSPs): 126, 245, 370 and 585. PET/AI was estimated for two historical periods, WorldClim 1.4 (1960-1990) and WorldClim 2.1 (1970-2000), representing on average a decade's change, by applying the Hargreaves methodology described above. Similarly, PET/AI was estimated for two future time periods, namely 2021-2040 and 2041-2060, for each of the 25 models across their respective four SSP scenarios (126, 245, 370, 585).

    Aridity Index

    Aridity is often expressed as an aridity index, the ratio of precipitation over PET, signifying the amount of precipitation available in relation to atmospheric water demand and quantifying the water (from rainfall) available for plant growth after ET demand has been met, comparing incoming moisture totals with potential outgoing moisture. The AI for the averaged time periods has been calculated on a per-grid-cell basis as:

    AI = MA_Prec / MA_PET

    where AI = aridity index, MA_Prec = mean annual precipitation, and MA_PET = mean annual reference evapotranspiration.

    The mean annual precipitation (MA_Prec) values were obtained from the CMIP6 climate projections, while the PET datasets estimated on a monthly average basis by the method described above were aggregated to mean annual values (MA_PET). Using this formulation, AI values are unitless, increasing with more humid conditions and decreasing with more arid conditions.

    Multi-Model Averaged Ensembles

    Based upon the distribution of the various scenarios along a gradient of their projected temperature and precipitation estimates for each of the four SSP and two future time periods, three multi-model ensembles, each articulated by their four respective SSPs, were identified. The three parameters of monthly minimum temperature, monthly maximum temperature and monthly precipitation for the ESMs included within each of these ensemble categories were averaged for each of their respective SSPs. These averaged parameters were then used to calculate the PET/AI as per the above methodology.

    Code Availability:

    The algorithm and code in Python used to calculate PET and AI is available on Figshare at the link below: https://figshare.com/articles/software/Global_Future_PET_AI_Algorithm_Code_Python_-_Calculate_PET_AI/24978666

    DATA FORMAT

    PET datasets are available as monthly averages (12 datasets, i.e. one dataset for each month, averaged over the specified time period) or as an annual average (1 dataset) for the specified time period. Aridity Index grid layers are available as one grid layer representing the annual average over the specified period.

    The following nomenclature is used to describe the dataset:

    Zipped files - directory names refer to: Model_SSP_Time-Period. For example, ACCESS-CM2_126_2021-2040.zip is Model: ACCESS-CM2 / SSP: 126 / Time-Period: 2021-2040.

    Prefix of files (TIFFs) is either: pet_ for PET layers; aridity_index for Aridity Index layers (no suffix).

    Suffix for PET files is either: 1, 2, ... 12 (month of the year); yr (yearly average); sd (standard deviation).

    Examples: pet_02.tif is the PET average for the month of February; pet_yr.tif is the PET annual average; pet_sd.tif is the standard deviation of the annual PET; aridity_index.tif is the annual aridity index.

    The PET values are defined as total mm of PET per month or per year. The Aridity Index values are unitless. The geospatial dataset is in geographic coordinates; datum and spheroid are WGS84; spatial units are decimal degrees. The spatial resolution is 30 arc-seconds or 0.008333 degrees. Arc degrees and seconds are angular distances, and conversion to linear units (like km) varies with latitude.

    The Future-PET and Future-Aridity Index data layers have been processed and finalized for distribution online as GeoTIFFs. These datasets have been zipped (.zip) into monthly series or individual annual layers, by each combination of climate model/scenario, and are available for online access.

    Data Storage Hierarchy: the database is organized for storage into a hierarchy of directories (see ReadMe.doc). Individual zipped files are generally about 1 GB or less.

    Associated peer-reviewed journal article: Zomer RJ, Xu J, Spano D and Trabucco A. 2024. CMIP6-based global estimates of future aridity index and potential evapotranspiration for 2021-2060. Open Research Europe 4:157. https://doi.org/10.12688/openreseurope.18110.1

    For further info, please refer to these earlier papers describing the database and methodology:
    Zomer, R.J.; Xu, J.; Trabucco, A. 2022. Version 3 of the Global Aridity Index and Potential Evapotranspiration Database. Scientific Data 9, 409.
    Zomer, R.J.; Bossio, D.A.; Trabucco, A.; van Straaten, O.; Verchot, L.V. 2008. Climate Change Mitigation: A Spatial Analysis of Global Land Suitability for Clean Development Mechanism Afforestation and Reforestation. Agriculture, Ecosystems and Environment 126:67-80.
    Trabucco, A.; Zomer, R.J.; Bossio, D.A.; van Straaten, O.; Verchot, L.V. 2008. Climate Change Mitigation through Afforestation/Reforestation: A global analysis of hydrologic
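
    To make the Methods above concrete, a minimal R sketch of the per-cell Hargreaves PET and aridity-index calculation (the authors' own implementation is the Python code linked under Code Availability; the raster file names here are placeholders, and RA is assumed to be pre-computed in units compatible with mm/day):

      library(terra)

      tmin <- rast("tmin_monthly.tif")   # 12 layers, deg C
      tmax <- rast("tmax_monthly.tif")   # 12 layers, deg C
      prec <- rast("prec_monthly.tif")   # 12 layers, mm/month
      ra   <- rast("ra_monthly.tif")     # extraterrestrial radiation, mm/day equivalent

      tmean <- (tmax + tmin) / 2
      td    <- tmax - tmin
      days  <- c(31, 28.25, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)

      # Hargreaves: PET = 0.023 * RA * (Tmean + 17.8) * TD^0.5, per month
      pet_monthly <- 0.023 * ra * (tmean + 17.8) * sqrt(td) * days   # mm/month
      ma_pet  <- sum(pet_monthly)        # MA_PET, mm/year
      aridity <- sum(prec) / ma_pet      # AI = MA_Prec / MA_PET (unitless)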

  15. Spatial Multimodal Analysis (SMA) - Spatial Transcriptomics

    • figshare.scilifelab.se
    • researchdata.se
    json
    Updated Jan 15, 2025
    Cite
    Marco Vicari; Reza Mirzazadeh; Anna Nilsson; Patrik Bjärterot; Ludvig Larsson; Hower Lee; Mats Nilsson; Julia Foyer; Markus Ekvall; Paulo Czarnewski; Xiaoqun Zhang; Per Svenningsson; Per Andrén; Lukas Käll; Joakim Lundeberg (2025). Spatial Multimodal Analysis (SMA) - Spatial Transcriptomics [Dataset]. http://doi.org/10.17044/scilifelab.22778920.v1
    Explore at:
    json (available download formats)
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    KTH Royal Institute of Technology, Science for Life Laboratory
    Authors
    Marco Vicari; Reza Mirzazadeh; Anna Nilsson; Patrik Bjärterot; Ludvig Larsson; Hower Lee; Mats Nilsson; Julia Foyer; Markus Ekvall; Paulo Czarnewski; Xiaoqun Zhang; Per Svenningsson; Per Andrén; Lukas Käll; Joakim Lundeberg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains Spatial Transcriptomics (ST) data matching Matrix-Assisted Laser Desorption/Ionization - Mass Spectrometry Imaging (MALDI-MSI). This data is complementary to data contained in the same project. Files with the same identifiers in the two datasets originated from the very same tissue section and can be combined in a multimodal ST-MSI object. For more information about the dataset please see our manuscript posted on bioRxiv (doi: https://doi.org/10.1101/2023.01.26.525195).

    This dataset includes ST data from 19 tissue sections, including human post-mortem and mouse samples. The spatial transcriptomics data was generated using the Visium protocol (10x Genomics). The murine tissue sections come from three different mice unilaterally injected with 6-OHDA. 6-OHDA is a neurotoxin that, when injected in the brain, can selectively destroy dopaminergic neurons. We used this mouse model to show the applicability of the technology that we developed, named Spatial Multimodal Analysis (SMA). Using our technology on these mouse brain tissue sections we were able to detect both dopamine with MALDI-MSI and the corresponding gene expression with ST.

    This dataset also includes one human post-mortem striatum sample that was placed on one Visium slide across the four capture areas. This sample was analyzed with a different ST protocol named RRST (Mirzazadeh, R., Andrusivova, Z., Larsson, L. et al. Spatially resolved transcriptomic profiling of degraded and challenging fresh frozen samples. Nat Commun 14, 509 (2023); https://doi.org/10.1038/s41467-023-36071-5), where probes capturing the whole transcriptome are first hybridized in the tissue section and then spatially detected.

    Each tissue section contained in the dataset has been given a unique identifier composed of the Visium array ID and the capture area ID of the Visium slide that the tissue section was placed on. This unique identifier is included in the file names of all the files relative to the same tissue section, including the MALDI-MSI files published in the other dataset included in this project. In this dataset you will find the following files for each tissue section:

    • raw files: the read one fastq files (containing the pattern *R1*fastq.gz in the file name), the read two fastq files (containing the pattern *R2*fastq.gz in the file name) and the raw microscope images (containing the pattern Spot.jpg in the file name). These are the only files needed to run the Space Ranger pipeline, which is freely available to any user (please see the 10x Genomics website for information on how to install and run Space Ranger);
    • processed data files, of three types: a) Space Ranger outputs that were used to produce the figures in our publication; b) manual annotation tables in csv format produced using Loupe Browser 6 (csv tables with file names ending with the patterns _RegionLoupe.csv, _filter.csv, _dopamine.csv, _lesion.csv, _region.csv); c) json files that we used as input for Space Ranger in the cases where the automatic tissue detection included in the pipeline failed to recognize the tissue or the fiducials.

    Using these processed files the user can reproduce the figures of our publication without having to restart from the raw data files.

    The MALDI-MSI analysis preceding ST was performed with different matrices on different tissue sections. We used: 1) 9-aminoacridine (9-AA) for detection of metabolites in negative ionization mode; 2) 2,5-dihydroxybenzoic acid (DHB) for detection of metabolites in positive ionization mode; 3) 4-(anthracen-9-yl)-2-fluoro-1-ethylpyridin-1-ium iodide (FMP-10), which charge-tags molecules with phenolic hydroxyls and/or primary amines, including neurotransmitters. The information about which matrix was sprayed on each tissue section, and other information about the samples, is included in the metadata table.

    We also used three types of control samples:
    • standard Visium: samples processed with standard Visium (i.e. no matrix spraying, no MALDI-MSI; protocol as recommended by 10x Genomics with no exceptions);
    • internal controls (iCTRL): samples not sprayed with any matrix nor processed with MALDI-MSI, but located on the same Visium slide where other samples were processed with MALDI-MSI;
    • FMP-10-iCTRL: a sample sprayed with FMP-10, and then processed as an iCTRL.

    This and other information is provided in the metadata table.

  16. R 824 s 2008 - Dataset - DIP Lab

    • ckan.diplab.uplb.edu.ph
    Updated Mar 9, 2023
    + more versions
    Cite
    ckan.diplab.uplb.edu.ph (2023). R 824 s 2008 - Dataset - DIP Lab [Dataset]. https://ckan.diplab.uplb.edu.ph/dataset/r-824-s-2008
    Explore at:
    Dataset updated
    Mar 9, 2023
    Dataset provided by
    CKAN (https://ckan.org/)
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Amending Rule II Section 5.C.2.B And C, Section 6.A.2.H, Section 6.B.1.G And Section 6.B.2.A-B And C.6 And Section 19 Item J And Deleting Section 6.A.2.F And Rule V Section 19 Item I Of The Implementing Rules And Regulations Of Bp 220

  17. Data from: The Acidity of Atmospheric Particles and Clouds

    • datasets.ai
    • catalog.data.gov
    Updated Aug 27, 2024
    Cite
    U.S. Environmental Protection Agency (2024). The Acidity of Atmospheric Particles and Clouds [Dataset]. https://datasets.ai/datasets/the-acidity-of-atmospheric-particles-and-clouds
    Explore at:
    Available download formats
    Dataset updated
    Aug 27, 2024
    Dataset authored and provided by
    U.S. Environmental Protection Agency
    Description

    Dataset contains supplementary information (model inputs and/or outputs and literature values) for Section 4.1 (idealized acidity calculations), Section 4.2 (box model calculations of pH for ambient conditions), Section 7.1 (observed aerosol pH values), Section 7.2 (observed cloud pH values), and Section 8.1 (CMAQ hemispheric predictions).

    This dataset is associated with the following publication: Pye, H., A. Nenes, B. Alexander, A. Ault, M. Barth, S. Clegg, J. Collett, K. Fahey, C. Hennigan, H. Herrmann, M. Kanakidou, J. Kelly, I. Ku, V.F. McNeill, N. Riemer, T. Schaefer, G. Shi, A. Tilgner, J. Walker, T. Wang, R. Weber, J. Xing, R. Zaveri, and A. Zuend. The Acidity of Atmospheric Particles and Clouds. Atmospheric Chemistry and Physics. Copernicus Publications, Katlenburg-Lindau, GERMANY, 20(8): 4809–4888, (2020).

  18. Copernicus Digital Elevation Model (DEM) for Europe at 3 arc seconds (ca. 90...

    • data.mundialis.de
    • data.opendatascience.eu
    • +3more
    Updated Feb 23, 2022
    + more versions
    Cite
    (2022). Copernicus Digital Elevation Model (DEM) for Europe at 3 arc seconds (ca. 90 meter) resolution derived from Copernicus Global 30 meter DEM dataset [Dataset]. http://doi.org/10.5281/zenodo.6211701
    Explore at:
    Dataset updated
    Feb 23, 2022
    Description

    The Copernicus DEM is a Digital Surface Model (DSM) which represents the surface of the Earth including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (corresponding to 1 arc second). Note that ocean areas do not have tiles; there one can assume height values equal to zero. Data is provided as Cloud Optimized GeoTIFFs. Note that the vertical unit for measurement of elevation height is meters.

    The Copernicus DEM for Europe at 3 arc seconds (0:00:03 = 0.00083333333 degrees, ~90 meters) in COG format has been derived from the Copernicus DEM GLO-30, mirrored on Open Data on AWS, a dataset managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).

    Processing steps: The original Copernicus GLO-30 DEM contains a relevant percentage of tiles with non-square pixels. We created a mosaic map in VRT format (https://gdal.org/drivers/raster/vrt.html) and defined within the VRT file the rule to apply cubic resampling while reading the data, i.e. when importing them into GRASS GIS for further processing. We chose cubic instead of bilinear resampling since the height-width ratio of non-square pixels is up to 1:5; hence, artefacts between adjacent tiles in rugged terrain could be minimized:

    gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt

    In order to reduce the spatial resolution to 3 arc seconds, weighted resampling was performed in GRASS GIS (using r.resamp.stats -w) and the pixel values were scaled by 1000 (storing the pixels as integer values) for data volume reduction. In addition, a hillshade raster map was derived from the resampled elevation map (using r.relief, GRASS GIS). Eventually, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.
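
    An approximate R sketch of the aggregation and scaling steps (the original processing used GRASS GIS r.resamp.stats -w; terra::aggregate with a mean is a close but not identical analogue, and the file names are placeholders):

      library(terra)

      dem  <- rast("Copernicus_DSM_30m_tile.tif")     # 1 arc second input tile
      dem3 <- aggregate(dem, fact = 3, fun = "mean")  # 3x3 mean -> 3 arc seconds
      dem3_int <- as.int(round(dem3 * 1000))          # scale by 1000, store as integer
      writeRaster(dem3_int, "dem_3arcsec.tif", datatype = "INT4S",
                  gdal = c("COMPRESS=DEFLATE"), overwrite = TRUE)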

  19. Prior literature relating fixation duration to fatigue or time-on-task...

    • plos.figshare.com
    xls
    Updated Sep 16, 2024
    Cite
    Lee Friedman; Oleg V. Komogortsev (2024). Prior literature relating fixation duration to fatigue or time-on-task (TOT). [Dataset]. http://doi.org/10.1371/journal.pone.0310436.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Sep 16, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Lee Friedman; Oleg V. Komogortsev
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Prior literature relating fixation duration to fatigue or time-on-task (TOT).

  20. AUS Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith...

    • data.gov.au
    • researchdata.edu.au
    • +1more
    zip
    Updated Nov 20, 2019
    Cite
    Bioregional Assessment Program (2019). AUS Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3" resolution) - Release 2 [Dataset]. https://data.gov.au/data/dataset/c28597e8-8cfc-4b4f-8777-c9934051cce2
    Explore at:
    zip (12907403160 bytes; available download formats)
    Dataset updated
    Nov 20, 2019
    Dataset provided by
    Bioregional Assessment Program
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    Abstract

    This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied.

    This is Version 2 of the Australian Soil Depth of Regolith product of the Soil and Landscape Grid of Australia (produced 2015-06-01). The Soil and Landscape Grid of Australia has produced a range of digital soil attribute products. The digital soil attribute maps are in raster format at a resolution of 3 arc sec (~90 x 90 m pixels).
    Attribute definition: the regolith is the in situ and transported material overlying unweathered bedrock.
    Units: metres.
    Spatial prediction method: data mining using piecewise linear regression.
    Period (temporal coverage, approximate): 1900-2013.
    Spatial resolution: 3 arc seconds (approx. 90 m).
    Total number of gridded maps for this attribute: 3.
    Number of pixels with coverage per layer: 2007M (49200 x 40800).
    Total size before compression: about 8 GB.
    Total size after compression: about 4 GB.
    Data license: Creative Commons Attribution 3.0 (CC BY).
    Variance explained (cross-validation): R^2 = 0.38.
    Target data standard: GlobalSoilMap specifications.
    Format: GeoTIFF.

    Dataset History

    The methodology consisted of the following steps: (i) drillhole data preparation, (ii) compilation and selection of the environmental covariate raster layers and (iii) model implementation and evaluation.

    Drillhole data preparation: Drillhole data was sourced from the National Groundwater Information System (NGIS) database. This spatial database holds nationally consistent information about bores that were drilled as part of the Bore Construction Licensing Framework (http://www.bom.gov.au/water/groundwater/ngis/). The database contains 357,834 bore locations with associated lithology, bore construction and hydrostratigraphy records. This information was loaded into a relational database to facilitate analysis.

    Regolith depth extraction: The first step was to recognise and extract the boundary between the regolith and bedrock within each drillhole record. This was done using a keyword look-up table of bedrock- or lithology-related words from the record descriptions. 1,910 unique descriptors were discovered. Using this list of new standardised terms, analysis of the drillholes was conducted, and the depth value associated with the word in the description that was unequivocally pointing to reaching fresh bedrock material was extracted from each record using a tool developed in C# code. The second step of regolith depth extraction involved removal of drillhole bedrock depth records, deemed necessary because of the "noisiness" in depth records resulting from inconsistencies we found in the drilling and description standards identified in the legacy database. On completion of the filtering and removal of outliers, the drillhole database used in the model comprised 128,033 depth sites.

    Selection and preparation of environmental covariates: The environmental correlation style of DSM applies environmental covariate datasets to predict target variables, here regolith depth. Strongly performing environmental covariates operate as proxies for the factors that control regolith formation, including climate, relief, parent material, organisms and time (Jenny, 1941).

    Depth modelling was implemented using the PC-based R statistical software (R Core Team, 2014), and relied on the R Cubist package (Kuhn et al., 2013). To generate modelling uncertainty estimates, the following procedures were followed: (i) the random withholding of a subset comprising 20% of the whole depth record dataset for external validation; (ii) bootstrap sampling 100 times of the remaining dataset to produce repeated model training datasets. The Cubist model was then run repeatedly to produce a unique rule set for each of these training sets. Repeated model runs using different training sets, a procedure referred to as bagging or bootstrap aggregating, is a machine-learning ensemble procedure designed to improve the stability and accuracy of the model. The Cubist rule sets generated were then evaluated and applied spatially, calculating a mean predicted value (i.e. the final map). The 5% and 95% confidence intervals were estimated for each grid cell (pixel) in the prediction dataset by combining the variance from the bootstrapping process and the variance of the model residuals. Version 2 differs from Version 1 in that the modelling of depths was performed on the log scale to better conform to assumptions of normality used in calculating the confidence intervals, and the method to estimate the confidence intervals was improved to better represent the full range of variability in the modelling process (Wilford et al., in press).
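
    A minimal R sketch of the bagged Cubist procedure described above (the training objects and their column layout are placeholders; the real covariates and settings are those of the source methodology):

      library(Cubist)
      set.seed(1)

      # covariates: data.frame of environmental covariates at training sites
      # depth:      numeric vector of regolith depths at those sites
      # new_covariates: data.frame of covariates at prediction locations
      n_boot <- 100
      preds  <- matrix(NA_real_, nrow = nrow(new_covariates), ncol = n_boot)
      for (b in seq_len(n_boot)) {
        idx <- sample(nrow(covariates), replace = TRUE)            # bootstrap sample
        fit <- cubist(x = covariates[idx, ], y = log(depth[idx]))  # log scale, as in Version 2
        preds[, b] <- predict(fit, new_covariates)
      }
      depth_map <- exp(rowMeans(preds))   # back-transformed mean prediction (the final map)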

    Dataset Citation

    CSIRO (2015) AUS Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3" resolution) - Release 2. Bioregional Assessment Source Dataset. Viewed 22 June 2018, http://data.bioregionalassessments.gov.au/dataset/c28597e8-8cfc-4b4f-8777-c9934051cce2.
