16 datasets found
  1. An Atlas Based on the 'COADS' Data Set: Fields of Mean Wind, Cloudiness and...

    • rda-web-prod.ucar.edu
    • data.ucar.edu
    • +2more
    Cite
    An Atlas Based on the 'COADS' Data Set: Fields of Mean Wind, Cloudiness and Humidity at the Surface of the Global Ocean [Dataset]. https://rda-web-prod.ucar.edu/#!lfd?nb=y&b=topic&v=Atmosphere
    Description

    Monthly global grids of data, derived fluxes, and anomalies were prepared from the COADS data.

  2. Kaplan Global Sea Level Pressure (SLP) Anomalies from LDEO/IRI Climate Data...

    • access.earthdata.nasa.gov
    • cmr.earthdata.nasa.gov
    Updated Apr 21, 2017
    Cite
    (2017). Kaplan Global Sea Level Pressure (SLP) Anomalies from LDEO/IRI Climate Data Library [Dataset]. https://access.earthdata.nasa.gov/collections/C1214608718-SCIOPS
    Dataset updated
    Apr 21, 2017
    Time period covered
    Apr 1, 1854 - Dec 31, 1992
    Description

    A Reduced Space Optimal Interpolation procedure has been applied to the global sea level pressure (SLP) record from the Comprehensive Ocean Atmosphere Data Set (COADS) averaged on a 4x4 degree grid. The SLP anomalies are with respect to the climatological annual cycle estimated from COADS data for the period 1951-1980. The data are presented as a monthly climatology.

    Additional Kaplan SLP data include:
    • Optimal Interpolation, 1854-1992
    • Projected SLP anomalies based on a linear best fit of the EOF patterns to the data, 1854-1992
    • Kaplan RF and KF analysis errors and estimates of SLP, 1854-1992

  3. Surface Observations - Synoptic Code

    • data.amerigeoss.org
    • data.cnra.ca.gov
    • +2more
    html
    Updated Aug 18, 2022
    + more versions
    Cite
    United States (2022). Surface Observations - Synoptic Code [Dataset]. https://data.amerigeoss.org/dataset/surface-observations-synoptic-code-fccb7
    Available download formats: html
    Dataset updated
    Aug 18, 2022
    Dataset provided by
    United States
    Description

    Daily weather observations from global land stations, recorded in synoptic code. Period of record 1950 only.

  4. Codes in R for spatial statistics analysis, ecological response models and...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 6, 2023
    Cite
    Espinosa, S. (2023). Codes in R for spatial statistics analysis, ecological response models and spatial distribution models [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7603556
    Dataset updated
    Feb 6, 2023
    Dataset provided by
    Rössel-Ramírez, D. W.
    Espinosa, S.
    Martínez-Montoya, J. F.
    Palacio-Núñez, J.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In the last decade, a plethora of algorithms have been developed for spatial ecology studies. In our case, we use some of these codes for underwater research in applied ecology analyses of threatened endemic fishes and their natural habitat. For this, we developed codes in the RStudio® script environment to run spatial and statistical analyses for ecological response and spatial distribution models (e.g., Hijmans & Elith, 2017; Den Burg et al., 2020). The employed R packages are as follows: caret (Kuhn et al., 2020), corrplot (Wei & Simko, 2017), devtools (Wickham, 2015), dismo (Hijmans & Elith, 2017), gbm (Freund & Schapire, 1997; Friedman, 2002), ggplot2 (Wickham et al., 2019), lattice (Sarkar, 2008), lattice (Musa & Mansor, 2021), maptools (Hijmans & Elith, 2017), modelmetrics (Hvitfeldt & Silge, 2021), pander (Wickham, 2015), plyr (Wickham & Wickham, 2015), pROC (Robin et al., 2011), raster (Hijmans & Elith, 2017), RColorBrewer (Neuwirth, 2014), Rcpp (Eddelbeuttel & Balamura, 2018), rgdal (Verzani, 2011), sdm (Naimi & Araujo, 2016), sf (e.g., Zainuddin, 2023), sp (Pebesma, 2020) and usethis (Gladstone, 2022).

    It is important to follow all the codes in order to obtain results from the ecological response and spatial distribution models. In particular, for the ecological scenario we selected the Generalized Linear Model (GLM), and for the geographic scenario we selected DOMAIN, also known as Gower's metric (Carpenter et al., 1993). We selected this regression method and this distance similarity metric because of their adequacy and robustness for studies with endemic or threatened species (e.g., Naoki et al., 2006). Next, we explain the statistical parameterization of the codes used in the GLM and DOMAIN runs:

    In the first instance, we generated the background points and extracted the values of the variables (Code2_Extract_values_DWp_SC.R). Barbet-Massin et al. (2012) recommend using 10,000 background points with regression methods (e.g., Generalized Linear Model) or distance-based models (e.g., DOMAIN). However, we considered factors such as the extent of the area and the type of study species important for the correct selection of the number of points (pers. obs.). We then extracted the values of the predictor variables (e.g., bioclimatic, topographic, demographic, habitat) at the presence and background points (e.g., Hijmans and Elith, 2017).

    Subsequently, we subdivided both the presence and background point groups into 75% training data and 25% test data, following the method of Soberón & Nakamura (2009) and Hijmans & Elith (2017). For training control, we selected the 10-fold cross-validation method, with the response variable (presence) assigned as a factor. If any other variable is important for the study species, it should also be assigned as a factor (Kim, 2009).
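The 75/25 subdivision described above can be sketched as follows (a minimal illustration in Python; the original codes are R scripts, and all variable names here are hypothetical):

```python
import random

def split_75_25(points, seed=42):
    """Randomly subdivide a list of points into 75% training and 25% test data."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = points[:]
    rng.shuffle(shuffled)
    cut = int(round(len(shuffled) * 0.75))
    return shuffled[:cut], shuffled[cut:]

# The same split is applied separately to presence and background points,
# as the text describes; the record lists here are placeholders.
presence = list(range(100))
background = list(range(10000))
pres_train, pres_test = split_75_25(presence)
back_train, back_test = split_75_25(background)
```

The two subsets stay disjoint and together recover the full point set, which is what the later model-evaluation steps rely on.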

    After that, we ran the code for the GBM method (Gradient Boosting Machine; Code3_GBM_Relative_contribution.R and Code4_Relative_contribution.R), which yields the relative contribution of each variable used in the model. We parameterized the code with a Gaussian distribution and 5,000 cross-validation iterations (e.g., Friedman, 2002; Kim, 2009; Hijmans and Elith, 2017). In addition, we selected a validation interval of 4 random training points (pers. obs.). The resulting plots show partial dependence as a function of each predictor variable.

    Subsequently, we evaluated the correlation of the variables using Pearson's method (Code5_Pearson_Correlation.R) to assess multicollinearity between variables (Guisan & Hofer, 2003). It is recommended to discard variables whose absolute bivariate correlation exceeds 0.70 (e.g., Awan et al., 2021).
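The multicollinearity screen above can be sketched as follows (a minimal Python illustration of the |r| > 0.70 rule; the original code is an R script, and the variable names and values here are hypothetical):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def highly_correlated(variables, threshold=0.70):
    """Return variable pairs whose absolute correlation exceeds the threshold."""
    return [(u, v) for u, v in combinations(variables, 2)
            if abs(pearson(variables[u], variables[v])) > threshold]

# Hypothetical predictor values at five sampling points:
env = {
    "elevation":   [10, 20, 30, 40, 50],
    "temperature": [30, 25, 21, 14, 10],  # strongly (negatively) tied to elevation
    "habitat":     [3, 1, 4, 1, 5],
}
flagged = highly_correlated(env)  # pairs to consider discarding
```

One variable from each flagged pair would then be dropped before fitting the GLM.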

    Once the above codes were run, we loaded the same subgroups (i.e., presence and background groups with 75% training and 25% test data; Code6_Presence&backgrounds.R) for the GLM method code (Code7_GLM_model.R). Here, we first ran a GLM per variable to obtain its p-value (alpha ≤ 0.05), selecting the value one (i.e., presence) as the likelihood factor. The generated models include polynomial terms to capture both linear and quadratic responses (e.g., Fielding and Bell, 1997; Allouche et al., 2006). From these results, we ran ecological response curve models, whose plots show the probability of occurrence against the values of continuous variables or the categories of discrete variables, together with the points of the presence and background training groups.

    On the other hand, a global GLM was also run, and the generalized model was evaluated by means of a 2 x 2 contingency matrix including both observed and predicted records; a representation is shown in Table 1 (adapted from Allouche et al., 2006). In this process we selected a threshold of 0.5 to obtain better modeling performance and avoid a high percentage of type I (omission) or type II (commission) errors (e.g., Carpenter et al., 1993; Fielding and Bell, 1997; Allouche et al., 2006; Kim, 2009; Hijmans and Elith, 2017).

    Table 1. Example of 2 x 2 contingency matrix for calculating performance metrics for GLM models. A represents true presence records (true positives), B represents false presence records (false positives - error of commission), C represents true background points (true negatives) and D represents false backgrounds (false negatives - errors of omission).

                    Validation set
    Model           True        False
    Presence        A           B
    Background      C           D

    We then calculated the Overall accuracy and True Skill Statistic (TSS) metrics. The first assesses the proportion of correctly predicted cases, while the second assesses the prevalence of correctly predicted cases (Olden and Jackson, 2002). TSS also gives equal weight to the prevalence of presence prediction and to the correction for random performance (Fielding and Bell, 1997; Allouche et al., 2006).
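From the contingency matrix of Table 1 (A true presences, B false presences, C true backgrounds, D false backgrounds), these two metrics can be computed as follows (a minimal sketch using the standard definitions in Allouche et al., 2006; the counts shown are hypothetical):

```python
def overall_accuracy(a, b, c, d):
    """Proportion of correctly predicted cases: (A + C) / total."""
    return (a + c) / (a + b + c + d)

def tss(a, b, c, d):
    """True Skill Statistic: sensitivity + specificity - 1."""
    sensitivity = a / (a + d)  # correctly predicted presences (A vs. omitted D)
    specificity = c / (b + c)  # correctly predicted backgrounds (C vs. committed B)
    return sensitivity + specificity - 1

# Hypothetical validation counts:
A, B, C, D = 40, 10, 35, 15
acc = overall_accuracy(A, B, C, D)  # 0.75
skill = tss(A, B, C, D)             # about 0.51
```

TSS ranges from -1 to +1, with values near +1 indicating perfect agreement and values near 0 indicating performance no better than random.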

    The last code (i.e., Code8_DOMAIN_SuitHab_model.R) is for species distribution modelling using the DOMAIN algorithm (Carpenter et al., 1993). Here, we loaded the variable stack and the presence and background group subdivided into 75% training and 25% test, each. We only included the presence training subset and the predictor variables stack in the calculation of the DOMAIN metric, as well as in the evaluation and validation of the model.

    Regarding the model evaluation and estimation, we selected the following estimators:

    1) Partial ROC, which evaluates the separation between the curves of positive (i.e., correctly predicted presence) and negative (i.e., correctly predicted absence) cases. The farther apart these curves are, the better the model predicts the correct spatial distribution of the species (Manzanilla-Quiñones, 2020).

    2) The ROC/AUC curve for model validation, where an optimal performance threshold is estimated to achieve an expected confidence of 75% to 99% probability (DeLong et al., 1988).

  5. Coupled Model Intercomparison Project Phase 5 (CMIP5) University of...

    • registry.opendata.aws
    Updated Mar 14, 2022
    Cite
    NOAA (2022). Coupled Model Intercomparison Project Phase 5 (CMIP5) University of Wisconsin-Madison Probabilistic Downscaling Dataset [Dataset]. https://registry.opendata.aws/noaa-uwpd-cmip5/
    Dataset updated
    Mar 14, 2022
    Dataset provided by
    National Oceanic and Atmospheric Administration: http://www.noaa.gov/
    Area covered
    Madison, Wisconsin
    Description

    The University of Wisconsin Probabilistic Downscaling (UWPD) is a statistically downscaled dataset based on the Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models. UWPD consists of three variables, daily precipitation and maximum and minimum temperature. The spatial resolution is 0.1°x0.1° degree resolution for the United States and southern Canada east of the Rocky Mountains.

    The downscaling methodology is not deterministic. Instead, to properly capture unexplained variability and extreme events, the methodology predicts a spatially and temporally varying Probability Density Function (PDF) for each variable. Statistics such as the mean, mean PDF and annual maximum statistics can be calculated directly from the daily PDF and these statistics are included in the dataset. In addition, “standard”, “raw” data is created by randomly sampling from the PDFs to create a “realization” of the local scale given the large-scale from the climate model. There are 3 realizations for temperature and 14 realizations for precipitation.

    The directory structure of the data is as follows:
    [cmip_version]/[scenario]/[climate_model]/[ensemble_member]/
    The realization files are named as follows:
    prcp_[realization_number]_[year].nc and temp_[realization_number]_[year].nc
    The time-mean files, averaged over certain year bounds, are named as follows:
    prcp_mean_[year_bound_1]_[year_bound_2].nc and temp_mean_[year_bound_1]_[year_bound_2].nc
    The time-mean Cumulative Distribution Function (CDF) files are named as follows:
    prcp_cdf_[year_bound_1]_[year_bound_2].nc and temp_cdf_[year_bound_1]_[year_bound_2].nc
    The CDF of the annual maximum precipitation is given for each year in the record:
    prcp_annual_max_cdf_[start_year_of_scenario]_[end_year_of_scenario].nc
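Assuming the bracketed fields of the layout above, the path of one realization file can be assembled like this (a sketch only; the field separators are inferred from the listing, and every concrete value shown is hypothetical):

```python
def realization_path(cmip_version, scenario, model, member, var, realization, year):
    """Build the path of one UWPD realization file, following the
    [cmip_version]/[scenario]/[climate_model]/[ensemble_member]/ layout."""
    directory = f"{cmip_version}/{scenario}/{model}/{member}"
    return f"{directory}/{var}_{realization}_{year}.nc"

# Hypothetical example: precipitation realization 1 for the year 2050.
path = realization_path("cmip5", "rcp85", "GFDL-ESM2M", "r1i1p1", "prcp", 1, 2050)
```

The mean and CDF files follow the same pattern with `prcp_mean`/`prcp_cdf` prefixes and two year bounds instead of a realization number.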

  6. Data and Associated Code for Projections of Unimpaired Flows, Storage, and...

    • gimi9.com
    + more versions
    Cite
    Data and Associated Code for Projections of Unimpaired Flows, Storage, and Managed Flows for Climate Change Scenarios in the San Francisco Bay-Delta Watershed, California [Dataset]. https://gimi9.com/dataset/data-gov_data-and-associated-code-for-projections-of-unimpaired-flows-storage-and-managed-flows-for/
    Area covered
    San Francisco Bay, California
    Description

    This data release includes data containing projections of unimpaired hydrology, reservoir storage, and downstream managed flows in the Sacramento River/San Joaquin River watershed, California, for scenarios of future climate change generated for the CASCaDE2 project (Computational Assessments of Scenarios of Change for the Delta Ecosystem, phase 2). Code used to produce the data is also included. The dataset is produced using a multiple-model approach. First, downscaled global climate model outputs are used to drive an existing Variable Infiltration Capacity/Variable Infiltration Capacity Routing (VIC/RVIC) model of Sacramento/San Joaquin hydrology, resulting in projections of daily, unimpaired flows throughout the watershed. A management model, CASCaDE2-modified CalSim (C2-CalSim), uses these projections as inputs and produces monthly estimates of reservoir and other infrastructure operations and resulting downstream managed flows. The CASCaDE2 resampling algorithm (CRESPI) also uses the projected daily unimpaired flows, along with historical managed flows, to estimate the daily variability in managed flows throughout the watershed. The monthly and daily managed-flow estimates are combined in a way that preserves the multi-decadal variability and century-scale trends produced by the C2-CalSim model and the day-to-day variability produced by the CRESPI algorithm. The resulting data are analyzed and processed to produce tables, figures, and text for the associated publications. To reduce the data release's size, data from a given step in the analysis that are not used in a subsequent step have not been included in this data release. All code generated by the USGS to produce the data in this data release is also included. This includes all code to download and preprocess external data; to set up and control the RVIC model runs; to modify, set up, and control runs of the CalSim 2 model; to implement and run the CRESPI algorithm; to postprocess and analyze model outputs; and to produce published figures, tables, and text that includes calculated values. A detailed README file is included with instructions for running the code, including how to obtain the external RVIC and CalSim 2 models.

  7. PARQUET - Basic climatological data - monthly - daily - hourly - 6 minutes...

    • gimi9.com
    + more versions
    Cite
    PARQUET - Basic climatological data - monthly - daily - hourly - 6 minutes (parquet format) [Dataset]. https://gimi9.com/dataset/eu_66159f1bf0686eb4806508e1
    Description

    Format .parquet
    This dataset gathers data in .parquet format. Instead of one .csv.gz per department per period, all departments are grouped into a single file per period. When possible (depending on size), several periods are grouped in the same file.

    Data origin
    The data come from:
    • Basic climatological data - monthly
    • Basic climatological data - daily
    • Basic climatological data - hourly
    • Basic climatological data - 6 minutes

    Data preparation
    The files ending with .prepared have undergone slight preparation steps:
    • deletion of spaces in column names
    • (flexible) typing
    The data are typed as follows:
    • date (YYYYMM, YYYYMMDD, YYYYMMDDHH, YYYYMMDDHHMN): integer
    • NUM_POSTE: string
    • USUAL_NAME: string
    • LAT: float
    • LON: float
    • ALTI: integer
    • columns beginning with Q ("quality") or NB ("number"): integer

    Update
    The data are updated at least once a week (depending on my availability) for the period "latest-2023-2024". If you have specific needs, feel free to contact me.

    Re-use: Meteo Squad
    These files are used in the Meteo Squad web application: https://www.meteosquad.com

    Contact
    If you have specific requests, please do not hesitate to contact me: contact@mistermeteo.com
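The typing rules stated above can be expressed as a small helper (a Python sketch; the rules come from the listing, while reconstructed column names such as NUM_POSTE and the unlisted-column fallback are assumptions):

```python
def column_dtype(name: str) -> str:
    """Map a column name to its declared type, following the listing's rules."""
    if name in {"YYYYMM", "YYYYMMDD", "YYYYMMDDHH", "YYYYMMDDHHMN"}:
        return "integer"            # date columns
    if name in {"NUM_POSTE", "USUAL_NAME"}:
        return "string"             # station id and name
    if name in {"LAT", "LON"}:
        return "float"              # coordinates
    if name == "ALTI" or name.startswith(("Q", "NB")):
        return "integer"            # altitude, quality flags, counts
    return "string"                 # assumed default for unlisted columns
```

Such a mapping can then be handed to whatever parquet reader is used to enforce consistent dtypes across files.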

  8. Data and code from: Breakdown in seasonal dynamics of subtropical ant...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1more
    zip
    Updated Oct 3, 2023
    Cite
    Jamie Kass; Masashi Yoshimura; Masako Ogasawara; Mayuko Suwabe; Francisco Hita Garcia; Georg Fischer; Kenneth Dudley; Ian Donohue; Evan Economo (2023). Data and code from: Breakdown in seasonal dynamics of subtropical ant communities with land-cover change [Dataset]. http://doi.org/10.5061/dryad.zkh1893fk
    Available download formats: zip
    Dataset updated
    Oct 3, 2023
    Dataset provided by
    Trinity College Dublin
    Okinawa Institute of Science and Technology Graduate University
    Authors
    Jamie Kass; Masashi Yoshimura; Masako Ogasawara; Mayuko Suwabe; Francisco Hita Garcia; Georg Fischer; Kenneth Dudley; Ian Donohue; Evan Economo
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Concerns about widespread human-induced declines in insect populations are mounting, yet little is known about how land-use change modifies the dynamics of insect communities, particularly in understudied regions. Here, we examine how the seasonal activity patterns of ants—key drivers of terrestrial ecosystem functioning—vary with anthropogenic land-cover change on a subtropical island landscape, and whether differences in temperature or species composition can explain observed patterns. Using trap captures sampled biweekly over two years from a biodiversity monitoring network covering Okinawa Island, Japan, we processed 1.2 million individuals and reconstructed activity patterns within and across habitat types. Forest communities exhibited greater temporal variability of activity than those in more developed areas. Using time-series decomposition to deconstruct this pattern, we found that sites with greater human development exhibited ant communities with diminished seasonality, reduced synchrony, and higher stochasticity compared to sites with greater forest cover. Our results cannot be explained by variation in regional or site temperature patterns, or by differences in species richness or composition among sites. Our study raises the possibility that disruptions to natural seasonal patterns of functionally key insect communities may comprise an important and underappreciated consequence of global environmental change that must be better understood across Earth's biomes.
    Methods
    The ant activity data used in the analysis were collected with Sea, Land, and Air Malaise (SLAM) traps on Okinawa Island in Japan from 2016-2018, and were processed using the code provided in the Zenodo archive (see README for links and the paper for references). Other datasets come from the Japan Meteorological Agency (JMA), in situ climate variables for sampling stations measured on-site, and land-cover data for Okinawa developed by our team.
    The data used in the analysis is included in the data upload (with JMA and land-cover data files in a Zenodo supplemental information under the CC BY 4.0 license), and all the analysis code is included in the R package provided in a separate Zenodo software archive. NOTE: All datasets (from Dryad and Zenodo) must be put into a single folder called /data within the main directory of the R analysis package before running any code. Please consult the README for further details on the data and code.

  9. Data and Code for npj climate and atmospheric science article "Incomplete...

    • b2find.eudat.eu
    • researchdata.tuwien.ac.at
    Updated Apr 4, 2025
    Cite
    (2025). Data and Code for npj climate and atmospheric science article "Incomplete mass closure in atmospheric nanoparticle growth" [Dataset]. https://b2find.eudat.eu/dataset/05ab9204-e5be-5a18-b02b-78cc8786d74a
    Dataset updated
    Apr 4, 2025
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset and code for the article "Incomplete mass closure in atmospheric nanoparticle growth", published in npj Climate and Atmospheric Science under the DOI 10.1038/s41612-025-00893-5.
    Context and methodology
    The dataset is atmospheric science data: new particle formation and growth data collected in Hyytiälä, Finland, Beijing, China, and the Po Valley, Italy, as well as laboratory data from the CERN CLOUD experiment. The dataset stores the most relevant data for the above-referenced publication and provides the code for recreating the main figures of the manuscript. After data collection, the raw data were analyzed as described in the publication. This repository stores the processed data related to the three main figures of the article.
    Technical details
    There are two zip files in this dataset: one for the data and code related to the figures, and one for the aerosol growth model, which is the basis for many calculations within the analysis workflow. figure-data-and-code-v1.0.0.zip contains three Python files to recreate the main figures of the manuscript; all relevant processed data is stored in the subfolder ./data. aerosolpy-release-v1.0.1.zip contains the Python package aerosolpy (https://github.com/DominikStolzenburg/aerosolpy), which was used for the aerosol growth model simulations behind the figures. The zip file contains a complete image of the Python module at the moment of publication of the article; for follow-ups, refer to the GitHub page.

  10. Zodiac Model - DNS code for analysing ice-ocean boundary layer turbulence

    • researchdata.edu.au
    • data.aad.gov.au
    Updated Feb 13, 2020
    Cite
    MONDAL, MAINAK; GAYEN, BISAKHDATTA (2020). Zodiac Model - DNS code for analysing ice-ocean boundary layer turbulence [Dataset]. https://researchdata.edu.au/zodiac-dns-code-boundary-layer/1444377
    Dataset updated
    Feb 13, 2020
    Dataset provided by
    Australian Antarctic Division: https://www.antarctica.gov.au/
    Australian Antarctic Data Centre
    Authors
    MONDAL, MAINAK; GAYEN, BISAKHDATTA
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jul 1, 2016 - Jun 30, 2019
    Description

    From the project summary:
    Melting of grounded ice mass in the Arctic and Antarctic may play a key role in future global sea level rise and Earth's climate system. The Antarctic Ice Sheet is losing ice at an increasing rate, primarily due to ocean-driven melting beneath ice shelves. The physics of these ocean-ice interactions is poorly understood which, along with limited observation constraints, leads to uncertainties in the predictions of future melt rate. We will undertake cutting edge direct numerical simulations to examine the complex dynamics of melting of ice-shelves in the presence of convection and turbulence. The project will provide the knowledge base for improved representation of these Antarctic processes in future global ocean models from which more accurate projections of future climate and sea level will follow.

    Description of the code: Zodiac is a pseudo-spectral code with adjustable choices of domain periodicity. It is mainly developed for analysing boundary layer turbulence with complex geometry. It uses an RK3 solver, ADI for the nonlinear terms, and multigrid for convergence. It is MPI-parallelised, and the domain decomposition can be customised based on need. The structure of the code is briefly described here:
    • zodiac.F90: the main program; initialises the code, reads the input parameters (input.dat, grid_def), calls the solver, and manages output.
    • duct.F90: solves the RK3 steps of the NS equations; uses FFT; calls ADI for the implicit nonlinear part; uses multigrid for faster convergence; handles the different forcings (written as subroutines in the same file, e.g. wave_forcing).
    • boundary.F90: provides the boundary conditions; here used to study the melting problem at the ice-water interface; can be customised for other problems.
    • flow_statistics.F90: computes the TKE and energy budget of the flow; also stitches the decomposed domain from the other processors.
    • flow_output.F90: handles output; presently supports .plt (for Tecplot, which needs the tecio64.a files), .vtk (for preview), and netCDF (to be added as a patch).
    • GRID_XY.dat: the main grid file for the present problem; can be customised.
    Required libraries: gcc/4.9.3 (or higher), netcdf-fortran, fftw2 (the code is not compatible with fftw3), openmpi (1.10.2 or higher, for multiprocessing), and the Tecplot library file (tecio64.a). The user needs to create the blank directories plane_data, plane_tke, and last_saved. The code can be downloaded from the GitHub repository: https://github.com/mmainak/zodiac2
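Per the notes above, the blank working directories the code expects can be created with a few lines (a minimal Python sketch; the directory names and clone URL come from the listing, and the build itself depends on the listed libraries):

```python
import os

# Fetch the code first (repository named in the listing), e.g.:
#   git clone https://github.com/mmainak/zodiac2

# Create the blank directories Zodiac expects for its output:
for d in ("plane_data", "plane_tke", "last_saved"):
    os.makedirs(d, exist_ok=True)  # no-op if the directory already exists
```

After this, the code is compiled against the required libraries (gcc, netcdf-fortran, fftw2, openmpi, tecio64.a) before running.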

  11. NODC Standard Product: NODC Taxonomic Code on CD-ROM (NCEI Accession...

    • catalog.data.gov
    • dataone.org
    Updated Jul 1, 2025
    + more versions
    Cite
    (Point of Contact) (2025). NODC Standard Product: NODC Taxonomic Code on CD-ROM (NCEI Accession 0050418) [Dataset]. https://catalog.data.gov/dataset/nodc-standard-product-nodc-taxonomic-code-on-cd-rom-ncei-accession-0050418
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    (Point of Contact)
    Description

    The content of the NODC Taxonomic Code, Version 8 CD-ROM (CD-ROM NODC-68) distributed by NODC is archived in this accession. Version 7 of the NODC Taxonomic Code (CD-ROM NODC-35), which does not include Integrated Taxonomic Information System (ITIS) Taxonomic Serial Numbers (TSNs), is also archived in this NODC accession. Prior to 1996, the NODC Taxonomic Code was the largest, most flexible, and most widely used of the various coding schemes which adapted the Linnean system of biological nomenclature to modern methods of data storage and retrieval. It was based on a system of code numbers that reflected taxonomic relationships. Hundreds of historic data collections archived at NODC use the NODC Taxonomic Code to encode species identification. With the development and release of ITIS in 1996, NODC published the final version (Version 8) of the NODC Taxonomic Code on CD-ROM. This CD-ROM provides NODC taxonomic codes along with the equivalent ITIS Taxonomic Serial Numbers to facilitate the transition to the Integrated Taxonomic Information System (ITIS, http://www.itis.gov/). With the publication of NODC Taxonomic Code Version 8, the NODC code was frozen and discontinued. ITIS assumed responsibility for assigning new TSN codes and for verifying accepted scientific names and synonyms. More information about the Integrated Taxonomic Information System is available at http://www.itis.gov.

  12. code-search-net-go

    • huggingface.co
    Updated May 18, 2023
    Cite
    Fernando Tarin Morales (2023). code-search-net-go [Dataset]. https://huggingface.co/datasets/Nan-Do/code-search-net-go
    Croissant is a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    May 18, 2023
    Authors
    Fernando Tarin Morales
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset Card for "code-search-net-go"

      Dataset Summary
    

    This dataset is the Go portion of CodeSearchNet, annotated with a summary column. The code-search-net dataset includes open-source functions with comments, collected from GitHub. The summary is a short description of what each function does.

      Languages
    

    The dataset's comments are in English and the functions are written in Go.

      Data Splits
    

    Train, test, validation labels are included in the dataset as… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/code-search-net-go.

  13. A 1km experimental dataset for the Mediterranean terrestrial region of Soil...

    • zenodo.org
    pdf, zip
    Updated Jul 15, 2024
    Cite
    Jaap Schellekens; Tessa Kramer; Michel van Klink; Robin van der Schalie; Yoann Malbeteau; Arjan Geers; Richard de Jeu (2024). A 1km experimental dataset for the Mediterranean terrestrial region of Soil Moisture, Land Surface Temperature and Vegetation Optical Depth from passive microwave data [Dataset]. http://doi.org/10.5281/zenodo.7244354
    Available download formats: pdf, zip
    Dataset updated
    Jul 15, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Jaap Schellekens; Tessa Kramer; Michel van Klink; Robin van der Schalie; Yoann Malbeteau; Arjan Geers; Richard de Jeu
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    A 1km experimental dataset for the Mediterranean terrestrial region of Soil Moisture, Land Surface Temperature and Vegetation Optical Depth from passive microwave data.

    Introduction

    This dataset is the Planet Labs PBC (VanderSat B.V.) contribution to the ESA 4DMED hydrology project (https://www.4dmed-hydrology.org/). It includes Soil Moisture, Land Surface Temperature and Vegetation Optical Depth for the 4DMED spatial domain and time period (2015-2021) at 1km pixel size. If you use the data please include the following reference:

    Jaap Schellekens, Tessa Kramer, Michel van Klink, Robin van der Schalie, Yoann Malbeteau, Arjan Geers, Richard de Jeu. (2022) A 1km experimental dataset for the Mediterranean terrestrial region of Soil Moisture, Land Surface Temperature and Vegetation Optical Depth from passive microwave data. DOI: 10.5281/zenodo.7244354. Planet Labs PBC/VanderSat B.V., ESA Contract No. 4000136272/21/I-EF

    Variables and files

    The dataset consists of the following files and products for the 4DMED domain. Detailed information about the products can also be found at docs.vandersat.com:

    • planet-teff-4dmed-V4.0.zip - LST (TEFF) ascending (daytime) and descending (nighttime)
      • TEFF-AMSR2-ASC_V4.0_1000
        • Land surface temperature daytime (13:30 solar time) at 1 km
      • TEFF-AMSR2-DESC_V4.0_1000
        • Land surface temperature nighttime (01:30 solar time) at 1 km
    • planet-teff-qf-4dmed-V4.0.zip - LST (TEFF) quality flags
    • planet-vod-4dmed-V4.0.zip - vegetation optical depth C and X band
      • VOD-AMSR2-C1-DESC_V4.0_1000
        • C1 band Vegetation Optical Depth (nighttime, 01:30 solar time) at 1km
      • VOD-AMSR2-X-DESC_V4.0_1000
        • X band Vegetation Optical Depth (nighttime, 01:30 solar time) at 1km
    • planet-sm-4dmed-V4.0.zip - All soil moisture products (C1, X and L-band)
      • SM-AMSR2-C1-DESC_V4.0_1000
        • C1 band soil moisture (nighttime, 01:30 solar time) at 1km
      • SM-AMSR2-X-DESC_V4.0_1000
        • X band soil moisture (nighttime, 01:30 solar time) at 1km
      • SM-SMAP-L-DESC_V4.0_1000
        • L band soil moisture (06:00 solar time) at 1km
    • planet-sm-qf-4dmed-V4.0.zip - Soil moisture quality maps see https://docs.vandersat.com/data_products/soil_water_content/data_flags.html and https://docs.vandersat.com/data_products/soil_water_content/data_flags.html#decoding-a-flag-file-using-python
      • QF-SM-AMSR2-C1-DESC_V4.0_1000
        • C1 band soil moisture (nighttime, 01:30 solar time) at 1km
      • QF-SM-AMSR2-X-DESC_V4.0_1000
        • X band soil moisture (nighttime, 01:30 solar time) at 1km
      • QF-SM-SMAP-L-DESC_V4.0_1000
        • L band soil moisture quality flags (06:00 solar time) at 1km
    • planet-sm-cor-4dmed-V4.0.zip - Yearly correlation maps of soil moisture derived from the different microwave bands. To be used as an extra quality indicator (for example, undetected RFI) or for uncertainty estimation
      • SM-CORR-C1-X-DESC_V4.0_1000 - yearly C1 vs X band Pearson correlation maps
      • SM-CORR-L-C1-DESC_V4.0_1000 - yearly L vs C1 band Pearson correlation maps
      • SM-CORR-L-X-DESC_V4.0_1000 - yearly L vs X band Pearson correlation maps
    • planet-aux-flags-4dmed-V4.0 - Extra flags for frozen soil and bare soil. Determined at 0.25 degree and interpolated to the 4DMED grid
      • QF-SNOWFROZEN-AMSR2-ASC_1000::RD - Frozen soil determined from daytime data
      • QF-SNOWFROZEN-AMSR2-DESC_1000::RD - Frozen soil determined from nighttime data
      • QF-BARESOIL-AMSR2-DESC_1000::RD - Bare soil determined from nighttime data
      • QF-BARESOIL-AMSR2-ASC_1000::RD - Bare soil determined from daytime data
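    The quality-flag products above are bit-packed integers; the docs.vandersat.com page linked in the list describes how to decode a flag file with Python. As a minimal illustration of the general bit-mask technique (the bit assignments below are invented placeholders, not the official flag table, so consult the documentation for the real bit definitions):

    ```python
    # Sketch of decoding a bit-packed quality-flag value. The bit names below
    # are assumptions for illustration only; the authoritative bit table is in
    # the docs.vandersat.com flag documentation.
    FLAG_BITS = {
        0: "frozen_soil",   # bit 0 set -> frozen-soil condition (assumed)
        1: "snow_cover",    # bit 1 set -> snow cover (assumed)
        2: "rfi_detected",  # bit 2 set -> radio-frequency interference (assumed)
        3: "bare_soil",     # bit 3 set -> bare-soil condition (assumed)
    }

    def decode_flag(value: int) -> list[str]:
        """Return the names of all flag bits set in an integer flag value."""
        return [name for bit, name in FLAG_BITS.items() if value & (1 << bit)]

    # A pixel with bits 0 and 2 set has value 0b101 == 5:
    print(decode_flag(5))  # ['frozen_soil', 'rfi_detected']
    print(decode_flag(0))  # [] -> no quality issues flagged
    ```

    The same masking idea applies when the flags are read as an integer array from one of the QF NetCDF files: test each bit with a bitwise AND against `1 << bit`.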

    All files are archived into one zip file per product group. Each individual NetCDF file in a zip archive contains one observation for the whole domain. If needed, you can combine the files into a single file using the CDO software (https://code.mpimet.mpg.de/projects/cdo), e.g. cdo -f nc4c mergetime *.nc outfile.nc.

    License

    The data for 4DMED is released under the Creative Commons license: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/)

    • Contains modified Copernicus Sentinel data 2015-2021
    • Contains modified JAXA GCOM-W1/AMSR2 data 2015-2021
    • Contains modified SMAP L1B Radiometer data: Piepmeier, J. R., P. Mohammed, J. Peng, E. J. Kim, G. De Amici, J. Chaubell, and C. Ruf. 2020. SMAP L1B Radiometer Half-Orbit Time-Ordered Brightness Temperatures, Version 4,5. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. doi: https://doi.org/10.5067/ZHHBN1KQLI20

    Contact

    Jaap Schellekens: jaap@planet.com

    Further information

    More information on the data and the flags can be found at https://docs.vandersat.com and https://www.4dmed-hydrology.org

    Background publications

    R.A.M. De Jeu, A.H.A. De Nijs, M.H.W. Van Klink (2016) Method and system for improving the resolution of sensor data, US10643098B2, EP3469516B1, WO2017216186A1

    De Jeu, R. A., Holmes, T. R., Parinussa, R. M., & Owe, M. (2014). A spatially coherent global soil moisture product with improved temporal resolution. Journal of hydrology, 516, 284-296.

    Moesinger, L., Dorigo, W., de Jeu, R., van der Schalie, R., Scanlon, T., Teubner, I. and Forkel, M., 2020. The global long-term microwave vegetation optical depth climate archive (VODCA). Earth System Science Data, 12(1), pp.177-196.

    Schmidt, L., Forkel, M., Zotta, R.-M., Scherrer, S., Dorigo, W. A., Kuhn-Régnier, A., van der Schalie, R., and Yebra, M.: Assessing the sensitivity of multi-frequency passive microwave vegetation optical depth to vegetation properties, Biogeosciences Discuss. [preprint], https://doi.org/10.5194/bg-2022-85, in review, 2022

    Van der Schalie, R., de Jeu, R.A.M., Kerr, Y.H., Wigneron, J.P., Rodríguez-Fernández, N.J., Al- Yaari, A., Parinussa, R.M., Mecklenburg, S. and Drusch, M. (2017), The merging of radiative transfer based surface soil moisture data from SMOS and AMSR-E, Remote Sensing of Environment, 189, pp.180-193.

    van der Vliet, M., van der Schalie, R., Rodriguez-Fernandez, N., Colliander, A., de Jeu, R., Preimesberger, W., Scanlon, T., Dorigo, W., 2020. Reconciling Flagging Strategies for Multi-Sensor Satellite Soil Moisture Climate Data Records. Remote Sensing 12, 3439. https://doi.org/10.3390/rs12203439

  14. Data and Code for: Improving Willingness-to-Pay Elicitation by Including a Benchmark Good

    • openicpsr.org
    Updated Jan 18, 2022
    Cite
    Rebecca Dizon-Ross; Seema Jayachandran (2022). Data and Code for: Improving Willingness-to-Pay Elicitation by Including a Benchmark Good [Dataset]. http://doi.org/10.3886/E159881V1
    Explore at:
    Dataset updated
    Jan 18, 2022
    Dataset provided by
    American Economic Association
    Authors
    Rebecca Dizon-Ross; Seema Jayachandran
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: We propose and validate a simple way to augment the standard Becker-DeGroot-Marschak method that researchers use to elicit willingness to pay (WTP) for a good. The augmentation is to measure WTP for another good (a "benchmark good"), one unrelated to both the good the researcher is interested in and the independent variables of interest, and to use WTP for the benchmark good as a control variable in analyses. We illustrate the method and how it can eliminate noise in measured WTP using data collected in Uganda.
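    A rough intuition for why the benchmark control helps can be sketched in a few lines of Python. Everything below (the data-generating process, parameter values, and variable names) is a made-up toy for illustration, not the paper's actual data or estimator: measured WTP mixes a true treatment effect with person-level response noise, the benchmark-good WTP shares that noise, so using it as a control soaks up variance.

    ```python
    import random

    # Toy simulation: person-level response noise contaminates measured WTP
    # for the good of interest AND for the unrelated benchmark good.
    random.seed(42)
    n = 1000
    treatment = [i % 2 for i in range(n)]           # randomized 0/1 treatment
    noise = [random.gauss(0, 2) for _ in range(n)]  # person-level response noise
    wtp = [1.0 * t + u + random.gauss(0, 1) for t, u in zip(treatment, noise)]
    wtp_bench = [u + random.gauss(0, 0.5) for u in noise]  # benchmark-good WTP

    def simple_ols(x, y):
        """Slope and residuals of y on x (with intercept), via least squares."""
        mx, my = sum(x) / len(x), sum(y) / len(y)
        slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
            (a - mx) ** 2 for a in x)
        resid = [b - my - slope * (a - mx) for a, b in zip(x, y)]
        return slope, resid

    def variance(v):
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / len(v)

    # Without the control: regress measured WTP on treatment directly.
    effect_raw, resid_raw = simple_ols(treatment, wtp)

    # With the control: partial the benchmark WTP out of measured WTP, then
    # regress the residual on treatment (a Frisch-Waugh-style shortcut;
    # treatment is randomized, so skipping its own residualization is harmless).
    _, wtp_partial = simple_ols(wtp_bench, wtp)
    effect_ctrl, resid_ctrl = simple_ols(treatment, wtp_partial)

    print(f"residual variance without control: {variance(resid_raw):.2f}")
    print(f"residual variance with control:    {variance(resid_ctrl):.2f}")
    ```

    Both regressions recover roughly the same treatment effect, but the controlled one has a much smaller residual variance, which is the noise-reduction gain the abstract describes.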

  15. Code for [Urban food delivery services as extreme-heat adaptation]

    • zenodo.org
    bin, zip
    Updated Oct 22, 2024
    Cite
    Yunke Zhang (2024). Code for [Urban food delivery services as extreme-heat adaptation] [Dataset]. http://doi.org/10.5281/zenodo.13926056
    Explore at:
    Available download formats: bin, zip
    Dataset updated
    Oct 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yunke Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the code to reproduce all figures in our paper "Urban food delivery services as extreme-heat adaptation".

  16. Federal Agency Bureau Codes

    • catalog.data.gov
    • datasets.ai
    Updated Oct 19, 2022
    Cite
    GSA (2022). Federal Agency Bureau Codes [Dataset]. https://catalog.data.gov/dataset/federal-agency-bureau-codes
    Explore at:
    Dataset updated
    Oct 19, 2022
    Dataset provided by
    General Services Administration (http://www.gsa.gov/)
    Description

    A list of 4-digit GSA federal agency bureau codes used to identify federal agencies. The primary source is the GSA published list.
