100+ datasets found
  1. iNaturalist Research-grade Observations

    • gbif.org
    • smng.net
    • +5 more
    Updated Sep 23, 2025
    Cite
    iNaturalist contributors (2025). iNaturalist Research-grade Observations [Dataset]. http://doi.org/10.15468/ab3s5x
    Dataset updated
    Sep 23, 2025
    Dataset provided by
    iNaturalist (http://inaturalist.org/)
    Global Biodiversity Information Facility (https://www.gbif.org/)
    Authors
    iNaturalist contributors
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Sep 17, 1768 - Sep 16, 2025
    Area covered
    Description

    Observations from iNaturalist.org, an online social network of people sharing biodiversity information to help each other learn about nature.

    Observations included in this archive met the following requirements:

    * Published under one of the following licenses or waivers: 1) https://creativecommons.org/publicdomain/zero/1.0/, 2) https://creativecommons.org/licenses/by/4.0/, 3) https://creativecommons.org/licenses/by-nc/4.0/

    * Achieved one of the following iNaturalist quality grades: Research

    * Created on or before 2025-09-16 15:00:20 -0700

    You can view observations meeting these requirements at https://www.inaturalist.org/observations?created_d2=2025-09-16+15%3A00%3A20+-0700&d1=1600-01-01&license=CC0%2CCC-BY%2CCC-BY-NC&quality_grade=research
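
For scripted filtering, the same query string can be rebuilt from the listed requirements; a minimal Python sketch (the parameter names are copied from the URL above, not taken from any iNaturalist API documentation):

```python
# Rebuild the observations query URL from the archive's filter criteria.
from urllib.parse import urlencode

params = {
    "created_d2": "2025-09-16 15:00:20 -0700",  # created on or before this time
    "d1": "1600-01-01",                          # observed on or after this date
    "license": "CC0,CC-BY,CC-BY-NC",             # the three accepted licenses/waivers
    "quality_grade": "research",                 # Research grade only
}

url = "https://www.inaturalist.org/observations?" + urlencode(params)
print(url)  # reproduces the query URL given above
```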

  2. Gridded Monthly Time-Mean Observation minus Analysis (oma) Values 0.5 x...

    • data.amerigeoss.org
    html, pdf, png
    Updated Dec 13, 2019
    + more versions
    Cite
    United States (2019). Gridded Monthly Time-Mean Observation minus Analysis (oma) Values 0.5 x 0.667 degree V001 (MA_HIRS2_NOAA07_OMA) at GES DISC [Dataset]. https://data.amerigeoss.org/dataset/gridded-monthly-time-mean-observation-minus-analysis-oma-values-0-5-x-0-667-degree-v001-ma-cfc4
    Available download formats: html, pdf, png
    Dataset updated
    Dec 13, 2019
    Dataset provided by
    United States
    Description

    The differences between the observations and the forecast background used for the analysis (the innovations or O-F for short) and those between the observations and the final analysis (O-A) are by-products of any assimilation system and provide information about the quality of the analysis and the impact of the observations. Innovations have been traditionally used to diagnose observation, background and analysis errors at observation locations (Hollingsworth and Lonnberg 1989; Dee and da Silva 1999). At the most simplistic level, innovation variances can be used as an upper bound on background errors, which are, in turn, an upper bound on the analysis errors. With more processing (and the assumption of optimality), the O-F and O-A statistics can be used to estimate observation, background and analysis errors (Desroziers et al. 2005). They can also be used to estimate the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products. With MERRA, however, a gridded version of the observations and innovations used in the assimilation process is being made available. The dataset allows the user to conveniently perform investigations related to the observing system and to calculate error estimates. Da Silva (2011) provides an overview and analysis of these datasets for MERRA.

        The innovations may be thought of as the correction to the background required by a given instrument, while the analysis increment (A-F) is the consolidated correction once all instruments, observation errors, and background errors have been taken into consideration. The extent to which the O-F statistics for the various instruments are similar to the A-F statistics reflects the degree of homogeneity of the observing system as a whole. Using the joint probability density function (PDF) of innovations and analysis increments, da Silva (2011) introduces the concepts of the effective gain (by analogy with the Kalman gain) and the contextual bias. In brief, the effective gain for an observation is a measure of how much the assimilation system has drawn to that type of observation, while the contextual bias is a measure of the degree of agreement between a given observation type and all other observations assimilated.
    
    With MERRA's gridded observation and innovation data sets, a wealth of information is available for examination of the quality of the analyses and how the different observations impact the analyses and interact with each other. Such examinations can be conducted regionally or globally and should provide useful information for the next generation of reanalyses.
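
As one concrete example of the error estimation such O-F and O-A statistics allow (the general consistency diagnostics of Desroziers et al. 2005, not a MERRA-specific recipe), products of analysis residuals, innovations, and analysis increments estimate observation-, background-, and analysis-error variances in observation space. A minimal sketch, assuming `omf` and `oma` hold co-located O-F and O-A values for a single instrument and that the errors are unbiased:

```python
import numpy as np

def desroziers_diagnostics(omf, oma):
    """Scalar Desroziers et al. (2005) consistency diagnostics.

    omf : array of observation-minus-forecast (innovation) values
    oma : array of observation-minus-analysis (analysis residual) values
    Returns estimated observation-, background- and analysis-error variances,
    all in observation space, under the usual unbiasedness assumptions.
    """
    omf = np.asarray(omf, dtype=float)
    oma = np.asarray(oma, dtype=float)
    amf = omf - oma                   # analysis increment (A-F) at the observation locations
    sigma_o2 = np.mean(oma * omf)     # E[(O-A)(O-F)] ~ observation-error variance
    sigma_b2 = np.mean(amf * omf)     # E[(A-F)(O-F)] ~ background-error variance
    sigma_a2 = np.mean(oma * amf)     # E[(O-A)(A-F)] ~ analysis-error variance
    return sigma_o2, sigma_b2, sigma_a2
```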
    
  3. Homogeneous Means in the UBV System - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Feb 1, 2002
    Cite
    (2002). Homogeneous Means in the UBV System - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/62da8e3c-0634-580e-8203-3f506f09582f
    Dataset updated
    Feb 1, 2002
    Description

    The present catalog supersedes an earlier edition of Nicolet (1978). It is a collection of weighted mean photoelectric values (V, B-V, U-B) for stars measured in the UBV system. The mean values were computed by combining all individual measurements compiled in the catalog of Mermilliod (1987), except those that were clearly found to be erroneous for some reason or another. Some newer observations compiled since 1987 are also included in the means. The procedure for computing the homogeneous means involved the calculation of normal averages weighted by the number of observations in each list (unity when not published). New weights are assigned based on the deviation of each value from the previous mean, then a new weighted mean is computed. This technique is not as rigorous as that used by Nicolet (comparison of each list with the standard system master list), but the latter cannot often be realized effectively in practice, since many lists do not contain enough stars in common with a standard list. Also, there are now so many references (more than 1500) that it is not feasible to analyze each publication with respect to a standard list. This edition of the catalog contains 92964 stars measured since the introduction of the UBV system in 1953. The data included are star identification in the Geneva coded numbering system, double and variable codes, UBV data and their standard deviations, and number of observations. A second file contains the definition of the coded numbering system. The catalog was prepared at the Institut d'Astronomie de l'Universite de Lausanne in Geneva. Cone search capability for table II/168/ubvmeans (Catalog Data)
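
The two-step averaging described above can be sketched as follows; note that the catalogue description does not specify the functional form used to down-weight deviant values, so the weight function below is purely illustrative:

```python
import numpy as np

def homogeneous_mean(values, n_obs, n_iter=1):
    """Sketch of the two-step averaging described above, for one star.

    values : per-list mean magnitudes (or colour indices)
    n_obs  : number of observations behind each list value (1 when not published)
    The down-weighting of values that deviate from the previous mean is NOT
    specified in the catalogue description; the inverse-square form below is
    an assumed, illustrative choice.
    """
    values = np.asarray(values, dtype=float)
    w = np.asarray(n_obs, dtype=float)        # first pass: weight by number of observations
    mean = np.average(values, weights=w)
    for _ in range(n_iter):                   # second pass: re-weight by deviation from previous mean
        dev = values - mean
        scale = np.std(values) or 1.0
        w2 = w / (1.0 + (dev / scale) ** 2)   # assumed weight function (illustrative only)
        mean = np.average(values, weights=w2)
    return mean
```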

  4. Gridded Monthly Time-Mean Observation minus Analysis (oma) Values 0.5 x...

    • cloud.csiss.gmu.edu
    html, pdf, png
    Updated Dec 13, 2019
    + more versions
    Cite
    United States (2019). Gridded Monthly Time-Mean Observation minus Analysis (oma) Values 0.5 x 0.667 degree V001 (MA_MSU_NOAA11_OMA) at GES DISC [Dataset]. https://cloud.csiss.gmu.edu/uddi/dataset/gridded-monthly-time-mean-observation-minus-analysis-oma-values-0-5-x-0-667-degree-v001-ma-ca71
    Available download formats: html, png, pdf
    Dataset updated
    Dec 13, 2019
    Dataset provided by
    United States
    Description

    The differences between the observations and the forecast background used for the analysis (the innovations or O-F for short) and those between the observations and the final analysis (O-A) are by-products of any assimilation system and provide information about the quality of the analysis and the impact of the observations. Innovations have been traditionally used to diagnose observation, background and analysis errors at observation locations (Hollingsworth and Lonnberg 1989; Dee and da Silva 1999). At the most simplistic level, innovation variances can be used as an upper bound on background errors, which are, in turn, an upper bound on the analysis errors. With more processing (and the assumption of optimality), the O-F and O-A statistics can be used to estimate observation, background and analysis errors (Desroziers et al. 2005). They can also be used to estimate the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products. With MERRA, however, a gridded version of the observations and innovations used in the assimilation process is being made available. The dataset allows the user to conveniently perform investigations related to the observing system and to calculate error estimates. Da Silva (2011) provides an overview and analysis of these datasets for MERRA.

        The innovations may be thought of as the correction to the background required by a given instrument, while the analysis increment (A-F) is the consolidated correction once all instruments, observation errors, and background errors have been taken into consideration. The extent to which the O-F statistics for the various instruments are similar to the A-F statistics reflects the degree of homogeneity of the observing system as a whole. Using the joint probability density function (PDF) of innovations and analysis increments, da Silva (2011) introduces the concepts of the effective gain (by analogy with the Kalman gain) and the contextual bias. In brief, the effective gain for an observation is a measure of how much the assimilation system has drawn to that type of observation, while the contextual bias is a measure of the degree of agreement between a given observation type and all other observations assimilated.
    
    With MERRA's gridded observation and innovation data sets, a wealth of information is available for examination of the quality of the analyses and how the different observations impact the analyses and interact with each other. Such examinations can be conducted regionally or globally and should provide useful information for the next generation of reanalyses.
    
  5. atp7d

    • openml.org
    Updated Mar 14, 2019
    Cite
    (2019). atp7d [Dataset]. https://www.openml.org/d/41476
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 14, 2019
    Description

    Multivariate regression data set from: https://link.springer.com/article/10.1007%2Fs10994-016-5546-z : The Airline Ticket Price dataset concerns the prediction of airline ticket prices. The rows are a sequence of time-ordered observations over several days. Each sample in this dataset represents a set of observations from a specific observation date and departure date pair. The input variables for each sample are values that may be useful for prediction of the airline ticket prices for a specific departure date. The target variables in these datasets are the next day (ATP1D) price or minimum price observed over the next 7 days (ATP7D) for 6 target flight preferences: (1) any airline with any number of stops, (2) any airline non-stop only, (3) Delta Airlines, (4) Continental Airlines, (5) Airtrain Airlines, and (6) United Airlines. The input variables include the following types: the number of days between the observation date and the departure date (1 feature), the boolean variables for day-of-the-week of the observation date (7 features), the complete enumeration of the following 4 values: (1) the minimum price, mean price, and number of quotes from (2) all airlines and from each airline quoting more than 50 % of the observation days (3) for non-stop, one-stop, and two-stop flights, (4) for the current day, previous day, and two days previous. The result is a feature set of 411 variables. For specific details on how these datasets are constructed please consult Groves and Gini (2015). The nature of these datasets is heterogeneous with a mixture of several types of variables including boolean variables, prices, and counts.
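
A minimal sketch for loading the dataset programmatically, assuming scikit-learn's fetch_openml and the numeric id visible in the OpenML URL above (41476); for this multi-target setting the target columns may need to be selected explicitly from the returned frame:

```python
# Fetch atp7d from OpenML by its numeric id (taken from https://www.openml.org/d/41476).
from sklearn.datasets import fetch_openml

atp7d = fetch_openml(data_id=41476, as_frame=True)
frame = atp7d.frame            # rows are observation/departure-date pairs; 411 input features plus targets
print(frame.shape)
```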

  6. Gridded Monthly Time-Mean Observation (obs) Values 0.5 x 0.667 degree V001...

    • data.nasa.gov
    • cmr.earthdata.nasa.gov
    application/rdfxml +5
    Updated Dec 13, 2019
    + more versions
    Cite
    (2019). Gridded Monthly Time-Mean Observation (obs) Values 0.5 x 0.667 degree V001 (MA_SSMI_DMSP10_OBS) at GES DISC [Dataset]. https://data.nasa.gov/Earth-Science/Gridded-Monthly-Time-Mean-Observation-obs-Values-0/km4e-em8y
    Available download formats: application/rdfxml, csv, application/rssxml, xml, tsv, json
    Dataset updated
    Dec 13, 2019
    Description

    The differences between the observations and the forecast background used for the analysis (the innovations or O-F for short) and those between the observations and the final analysis (O-A) are by-products of any assimilation system and provide information about the quality of the analysis and the impact of the observations. Innovations have been traditionally used to diagnose observation, background and analysis errors at observation locations (Hollingsworth and Lonnberg 1989; Dee and da Silva 1999). At the most simplistic level, innovation variances can be used as an upper bound on background errors, which are, in turn, an upper bound on the analysis errors. With more processing (and the assumption of optimality), the O-F and O-A statistics can be used to estimate observation, background and analysis errors (Desroziers et al. 2005). They can also be used to estimate the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products. With MERRA, however, a gridded version of the observations and innovations used in the assimilation process is being made available. The dataset allows the user to conveniently perform investigations related to the observing system and to calculate error estimates. Da Silva (2011) provides an overview and analysis of these datasets for MERRA.

        The innovations may be thought of as the correction to the background required by a given instrument, while the analysis increment (A-F) is the consolidated correction once all instruments, observation errors, and background errors have been taken into consideration. The extent to which the O-F statistics for the various instruments are similar to the A-F statistics reflects the degree of homogeneity of the observing system as a whole. Using the joint probability density function (PDF) of innovations and analysis increments, da Silva (2011) introduces the concepts of the effective gain (by analogy with the Kalman gain) and the contextual bias. In brief, the effective gain for an observation is a measure of how much the assimilation system has drawn to that type of observation, while the contextual bias is a measure of the degree of agreement between a given observation type and all other observations assimilated.
    
    With MERRA's gridded observation and innovation data sets, a wealth of information is available for examination of the quality of the analyses and how the different observations impact the analyses and interact with each other. Such examinations can be conducted regionally or globally and should provide useful information for the next generation of reanalyses.
    
  7. International Comprehensive Ocean-Atmosphere Data Set (ICOADS) Release 3,...

    • data.ucar.edu
    • rda.ucar.edu
    • +5 more
    binary
    Updated Aug 4, 2024
    + more versions
    Cite
    Center for Ocean-Atmospheric Prediction Studies, Florida State University; Cooperative Institute for Research in Environmental Sciences, University of Colorado; Department of Atmospheric Science, University of Washington; Deutscher Wetterdienst (German Meteorological Service), Germany; Met Office, Ministry of Defence, United Kingdom; National Centers for Environmental Information, NESDIS, NOAA, U.S. Department of Commerce; National Oceanography Centre, University of Southampton; Physical Sciences Laboratory, Earth System Research Laboratory, OAR, NOAA, U.S. Department of Commerce; Research Data Archive, Computational and Information Systems Laboratory, National Center for Atmospheric Research, University Corporation for Atmospheric Research (2024). International Comprehensive Ocean-Atmosphere Data Set (ICOADS) Release 3, Monthly Summaries [Dataset]. http://doi.org/10.5065/D6V40SFD
    Available download formats: binary
    Dataset updated
    Aug 4, 2024
    Dataset provided by
    Research Data Archive at the National Center for Atmospheric Research, Computational and Information Systems Laboratory
    Authors
    Center for Ocean-Atmospheric Prediction Studies, Florida State University; Cooperative Institute for Research in Environmental Sciences, University of Colorado; Department of Atmospheric Science, University of Washington; Deutscher Wetterdienst (German Meteorological Service), Germany; Met Office, Ministry of Defence, United Kingdom; National Centers for Environmental Information, NESDIS, NOAA, U.S. Department of Commerce; National Oceanography Centre, University of Southampton; Physical Sciences Laboratory, Earth System Research Laboratory, OAR, NOAA, U.S. Department of Commerce; Research Data Archive, Computational and Information Systems Laboratory, National Center for Atmospheric Research, University Corporation for Atmospheric Research
    Time period covered
    Jan 1, 1800 - Nov 30, 2021
    Description

The International Comprehensive Ocean-Atmosphere Data Set (ICOADS) is a global ocean marine meteorological and surface ocean dataset. It is formed by merging many national and international data sources that contain measurements and visual observations from ships (merchant, navy, research), moored and drifting buoys, coastal stations, and other marine platforms. The coverage is global and sampling density varies depending on date and geographic position relative to shipping routes and ocean observing systems. The monthly summary time series are available at 2-degree (since 1800) and 1-degree (since 1960) spatial resolutions. Observations (22 observed and computed parameters) taken in each month for a given spatial latitude-by-longitude box are statistically summarized (e.g., mean, median, number of observations) and NOT interpolated or analyzed to fill data voids.
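
The box summarization can be illustrated with a small sketch (not the actual ICOADS processing chain); `obs` is a hypothetical DataFrame of individual marine reports with lat, lon, time and one observed variable:

```python
import numpy as np
import pandas as pd

def monthly_box_summary(obs: pd.DataFrame, box: float = 2.0) -> pd.DataFrame:
    """Summarize reports per month and per box of `box` degrees; no interpolation."""
    obs = obs.copy()
    obs["lat_box"] = np.floor(obs["lat"] / box) * box   # southern edge of the box
    obs["lon_box"] = np.floor(obs["lon"] / box) * box   # western edge of the box
    obs["month"] = obs["time"].dt.to_period("M")
    # Boxes without observations simply do not appear in the result.
    return obs.groupby(["month", "lat_box", "lon_box"])["sst"].agg(["mean", "median", "count"])
```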

  8. Effect of data source on estimates of regional bird richness in northeastern...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated May 4, 2021
    Cite
    Roi Ankori-Karlinsky; Ronen Kadmon; Michael Kalyuzhny; Katherine F. Barnes; Andrew M. Wilson; Curtis Flather; Rosalind Renfrew; Joan Walsh; Edna Guk (2021). Effect of data source on estimates of regional bird richness in northeastern United States [Dataset]. http://doi.org/10.5061/dryad.m905qfv0h
    Available download formats: zip
    Dataset updated
    May 4, 2021
    Dataset provided by
    University of Vermont
    Hebrew University of Jerusalem
    Gettysburg College
    Massachusetts Audubon Society
    Agricultural Research Service
    University of Michigan
    New York State Department of Environmental Conservation
    Columbia University
    Authors
    Roi Ankori-Karlinsky; Ronen Kadmon; Michael Kalyuzhny; Katherine F. Barnes; Andrew M. Wilson; Curtis Flather; Rosalind Renfrew; Joan Walsh; Edna Guk
    License

    CC0 1.0, https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Northeastern United States, United States
    Description

    Standardized data on large-scale and long-term patterns of species richness are critical for understanding the consequences of natural and anthropogenic changes in the environment. The North American Breeding Bird Survey (BBS) is one of the largest and most widely used sources of such data, but so far, little is known about the degree to which BBS data provide accurate estimates of regional richness. Here we test this question by comparing estimates of regional richness based on BBS data with spatially and temporally matched estimates based on state Breeding Bird Atlases (BBA). We expected that estimates based on BBA data would provide a more complete (and therefore, more accurate) representation of regional richness due to their larger number of observation units and higher sampling effort within the observation units. Our results were only partially consistent with these predictions: while estimates of regional richness based on BBA data were higher than those based on BBS data, estimates of local richness (number of species per observation unit) were higher in BBS data. The latter result is attributed to higher land-cover heterogeneity in BBS units and higher effectiveness of bird detection (more species are detected per unit time). Interestingly, estimates of regional richness based on BBA blocks were higher than those based on BBS data even when differences in the number of observation units were controlled for. Our analysis indicates that this difference was due to higher compositional turnover between BBA units, probably due to larger differences in habitat conditions between BBA units and a larger number of geographically restricted species. Our overall results indicate that estimates of regional richness based on BBS data suffer from incomplete detection of a large number of rare species, and that corrections of these estimates based on standard extrapolation techniques are not sufficient to remove this bias. Future applications of BBS data in ecology and conservation, and in particular, applications in which the representation of rare species is important (e.g., those focusing on biodiversity conservation), should be aware of this bias, and should integrate BBA data whenever possible.

    Methods Overview

This is a compilation of second-generation breeding bird atlas data and corresponding breeding bird survey data. It contains presence-absence breeding bird observations in 5 U.S. states (MA, MI, NY, PA, VT), sampling effort per sampling unit, the geographic location of sampling units, and environmental variables per sampling unit: elevation and elevation range (from SRTM), mean annual precipitation and mean summer temperature (from PRISM), and NLCD 2006 land-use data.

    Each row contains all observations per sampling unit, with additional tables containing information on sampling effort impact on richness, a rareness table of species per dataset, and two summary tables for both bird diversity and environmental variables.

    The methods for compilation are contained in the supplementary information of the manuscript but also here:

    Bird data

    For BBA data, shapefiles for blocks and the data on species presences and sampling effort in blocks were received from the atlas coordinators. For BBS data, shapefiles for routes and raw species data were obtained from the Patuxent Wildlife Research Center (https://databasin.org/datasets/02fe0ebbb1b04111b0ba1579b89b7420 and https://www.pwrc.usgs.gov/BBS/RawData).

    Using ArcGIS Pro© 10.0, species observations were joined to respective BBS and BBA observation units shapefiles using the Join Table tool. For both BBA and BBS, a species was coded as either present (1) or absent (0). Presence in a sampling unit was based on codes 2, 3, or 4 in the original volunteer birding checklist codes (possible breeder, probable breeder, and confirmed breeder, respectively), and absence was based on codes 0 or 1 (not observed and observed but not likely breeding). Spelling inconsistencies of species names between BBA and BBS datasets were fixed. Species that needed spelling fixes included Brewer’s Blackbird, Cooper’s Hawk, Henslow’s Sparrow, Kirtland’s Warbler, LeConte’s Sparrow, Lincoln’s Sparrow, Swainson’s Thrush, Wilson’s Snipe, and Wilson’s Warbler. In addition, naming conventions were matched between BBS and BBA data. The Alder and Willow Flycatchers were lumped into Traill’s Flycatcher and regional races were lumped into a single species column: Dark-eyed Junco regional types were lumped together into one Dark-eyed Junco, Yellow-shafted Flicker was lumped into Northern Flicker, Saltmarsh Sparrow and the Saltmarsh Sharp-tailed Sparrow were lumped into Saltmarsh Sparrow, and the Yellow-rumped Myrtle Warbler was lumped into Myrtle Warbler (currently named Yellow-rumped Warbler). Three hybrid species were removed: Brewster's and Lawrence's Warblers and the Mallard x Black Duck hybrid. Established “exotic” species were included in the analysis since we were concerned only with detection of richness and not of specific species.

    The resultant species tables with sampling effort were pivoted horizontally so that every row was a sampling unit and each species observation was a column. This was done for each state using R version 3.6.2 (R© 2019, The R Foundation for Statistical Computing Platform) and all state tables were merged to yield one BBA and one BBS dataset. Following the joining of environmental variables to these datasets (see below), BBS and BBA data were joined using rbind.data.frame in R© to yield a final dataset with all species observations and environmental variables for each observation unit.
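
The presence coding and pivoting steps can be illustrated with an equivalent sketch in Python/pandas (the authors used ArcGIS and R); the column names unit_id, species and breeding_code are hypothetical:

```python
import pandas as pd

def presence_matrix(records: pd.DataFrame) -> pd.DataFrame:
    """Code breeding-evidence codes 2-4 as presence (1) and 0-1 as absence (0),
    then pivot so each row is a sampling unit and each column a species."""
    records = records.copy()
    records["present"] = records["breeding_code"].isin([2, 3, 4]).astype(int)
    wide = records.pivot_table(index="unit_id", columns="species",
                               values="present", aggfunc="max", fill_value=0)
    return wide
```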

    Environmental data

    Using ArcGIS Pro© 10.0, all environmental raster layers, BBA and BBS shapefiles, and the species observations were integrated in a common coordinate system (North_America Equidistant_Conic) using the Project tool. For BBS routes, 400m buffers were drawn around each route using the Buffer tool. The observation unit shapefiles for all states were merged (separately for BBA blocks and BBS routes and 400m buffers) using the Merge tool to create a study-wide shapefile for each data source. Whether or not a BBA block was adjacent to a BBS route was determined using the Intersect tool based on a radius of 30m around the route buffer (to fit the NLCD map resolution). Area and length of the BBS route inside the proximate BBA block were also calculated. Mean values for annual precipitation and summer temperature, and mean and range for elevation, were extracted for every BBA block and 400m buffer BBS route using Zonal Statistics as Table tool. The area of each land-cover type in each observation unit (BBA block and BBS buffer) was calculated from the NLCD layer using the Zonal Histogram tool.

  9. University SET data, with faculty and courses characteristics

    • openicpsr.org
    Updated Sep 12, 2021
    + more versions
    Cite
    Under blind review in refereed journal (2021). University SET data, with faculty and courses characteristics [Dataset]. http://doi.org/10.3886/E149801V1
    Dataset updated
    Sep 12, 2021
    Authors
    Under blind review in refereed journal
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This paper explores a unique dataset of all the SET ratings provided by students of one university in Poland at the end of the winter semester of the 2020/2021 academic year. The SET questionnaire used by this university is presented in Appendix 1. The dataset is unique for several reasons. It covers all SET surveys filled in by students in all fields and levels of study offered by the university. In the period analysed, the university was entirely in the online regime amid the Covid-19 pandemic. While the expected learning outcomes formally have not been changed, the online mode of study could have affected the grading policy and could have implications for some of the studied SET biases. This Covid-19 effect is captured by econometric models and discussed in the paper. The average SET scores were matched with the characteristics of the teacher (degree, seniority, gender, and SET scores in the past six semesters); the course characteristics (time of day, day of the week, course type, course breadth, class duration, and class size); the attributes of the SET survey responses (the percentage of students providing SET feedback); and the grades of the course (mean, standard deviation, and percentage failed). Data on course grades are also available for the previous six semesters. This rich dataset allows many of the biases reported in the literature to be tested for and new hypotheses to be formulated, as presented in the introduction section.

The unit of observation, or a single row in the data set, is identified by three parameters: teacher unique id (j), course unique id (k), and the question number in the SET questionnaire (n ϵ {1, 2, 3, 4, 5, 6, 7, 8, 9}). It means that for each pair (j,k) we have nine rows, one for each SET survey question, or sometimes fewer when students did not answer one of the SET questions at all. For example, the dependent variable SET_score_avg(j,k,n) for the triplet (j = John Smith, k = Calculus, n = 2) is calculated as the average of all Likert-scale answers to question no. 2 in the SET survey distributed to all students that took the Calculus course taught by John Smith. The data set has 8,015 such observations or rows. The full list of variables or columns in the data set included in the analysis is presented in the attached file section. Their description refers to the triplet (teacher id = j, course id = k, question number = n). When the last value of the triplet (n) is dropped, it means that the variable takes the same values for all n ϵ {1, 2, 3, 4, 5, 6, 7, 8, 9}.

Two attachments:
• word file with variables description
• Rdata file with the data set (for R language)

Appendix 1. The SET questionnaire used for this paper.

Evaluation survey of the teaching staff of [university name]. Please complete the following evaluation form, which aims to assess the lecturer's performance. Only one answer should be indicated for each question. The answers are coded in the following way: 5 - I strongly agree; 4 - I agree; 3 - Neutral; 2 - I don't agree; 1 - I strongly don't agree.

1. I learnt a lot during the course.
2. I think that the knowledge acquired during the course is very useful.
3. The professor used activities to make the class more engaging.
4. If it was possible, I would enroll for the course conducted by this lecturer again.
5. The classes started on time.
6. The lecturer always used time efficiently.
7. The lecturer delivered the class content in an understandable and efficient way.
8. The lecturer was available when we had doubts.
9. The lecturer treated all students equally regardless of their race, background and ethnicity.
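
How SET_score_avg(j,k,n) is formed can be illustrated with a small sketch; the answer-level table and its column names are hypothetical, since the released data set is already aggregated to this level:

```python
import pandas as pd

def set_score_avg(answers: pd.DataFrame) -> pd.DataFrame:
    """answers: one row per student answer, with columns
    teacher_id (j), course_id (k), question_no (n), likert (1-5)."""
    return (answers
            .groupby(["teacher_id", "course_id", "question_no"])["likert"]
            .mean()                       # average of all Likert-scale answers for the triplet
            .rename("SET_score_avg")
            .reset_index())
```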

  10. Monthly aggregated Water Vapor MODIS MCD19A2 (1 km): Long-term data...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 11, 2024
    + more versions
    Cite
    Leandro Parente (2024). Monthly aggregated Water Vapor MODIS MCD19A2 (1 km): Long-term data (2000-2022) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8192543
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Rolf Simoes
    Tomislav Hengl
    Leandro Parente
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This data is part of the Monthly aggregated Water Vapor MODIS MCD19A2 (1 km) dataset. Check the related identifiers section on the Zenodo side panel to access other parts of the dataset. General Description The monthly aggregated water vapor dataset is derived from MCD19A2 v061. The Water Vapor data measures the column above ground retrieved from MODIS near-IR bands at 0.94μm. The dataset time spans from 2000 to 2022 and provides data that covers the entire globe. The dataset can be used in many applications like water cycle modeling, vegetation mapping, and soil mapping. This dataset includes:

• Monthly time-series: Derived from MCD19A2 v061, this data provides a monthly aggregated mean and standard deviation of daily water vapor time-series data from 2000 to 2022. Only positive non-cloudy pixels were considered valid observations to derive the mean and the standard deviation. The remaining no-data values were filled using the TMWM algorithm. This dataset also includes smoothed mean and standard deviation values using the Whittaker method. The quality assessment layers and the number of valid observations for each month can provide an indication of the reliability of the monthly mean and standard deviation values.
• Yearly time-series: Derived from the monthly time-series, this data provides yearly aggregated statistics of the monthly time-series data.
• Long-term data (2000-2022): Derived from the monthly time-series, this data provides long-term aggregated statistics for the whole series of monthly observations.

Data Details

• Time period: 2000–2022
• Type of data: Water vapor column above the ground (0.001 cm)
• How the data was collected or derived: Derived from MCD19A2 v061 using Google Earth Engine. Cloudy pixels were removed and only positive values of water vapor were considered to compute the statistics. The time-series gap-filling and time-series smoothing were computed using the Scikit-map Python package.
• Statistical methods used: Four statistics were derived: standard deviation and percentiles 25, 50, and 75.
• Limitations or exclusions in the data: The dataset does not include data for Antarctica.
• Coordinate reference system: EPSG:4326
• Bounding box (Xmin, Ymin, Xmax, Ymax): (-180.00000, -62.00081, 179.99994, 87.37000)
• Spatial resolution: 1/120 d.d. = 0.008333333 (1 km)
• Image size: 43,200 x 17,924
• File format: Cloud Optimized Geotiff (COG)

Support

If you discover a bug, artifact, or inconsistency, or if you have a question, please use one of the following channels:

• Technical issues and questions about the code: GitLab Issues
• General questions and comments: LandGIS Forum

Name convention

To ensure consistency and ease of use across and within the projects, we follow the standard Open-Earth-Monitor file-naming convention. The convention works with 10 fields that describe important properties of the data. In this way users can search files, prepare data analysis, etc., without needing to open files. The fields are:

• generic variable name: wv = Water vapor
• variable procedure combination: mcd19a2v061.seasconv = MCD19A2 v061 with gap-filling algorithm
• position in the probability distribution / variable type: m = mean | sd = standard deviation | n = number of observations | qa = quality assessment
• spatial support: 1km
• depth reference: s = surface
• time reference begin time: 20000101 = 2000-01-01
• time reference end time: 20221231 = 2022-12-31
• bounding box: go = global (without Antarctica)
• EPSG code: epsg.4326 = EPSG:4326
• version code: v20230619 = 2023-06-19 (creation date)
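
A small sketch of splitting such a layer name into the 10 fields above; the example string is composed here from the listed values purely for illustration, and the underscore separator is an assumption rather than something stated in this description:

```python
# Parse an Open-Earth-Monitor style layer name into the 10 documented fields.
FIELDS = [
    "variable", "procedure", "statistic", "spatial_support", "depth_reference",
    "time_begin", "time_end", "bounding_box", "epsg_code", "version",
]

def parse_layer_name(name: str) -> dict:
    parts = name.split("_")                      # assumed separator
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# Hypothetical example composed from the field values listed above.
example = "wv_mcd19a2v061.seasconv_m_1km_s_20000101_20221231_go_epsg.4326_v20230619"
print(parse_layer_name(example))
```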

  11. Baltic and North Sea Climatology hydrographic part (Version 1.0) - Dataset -...

    • b2find.eudat.eu
    Updated Jul 20, 2024
    Cite
    (2024). Baltic and North Sea Climatology hydrographic part (Version 1.0) - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/3eea1306-96ee-5102-9d09-392e38efe482
    Dataset updated
    Jul 20, 2024
    Area covered
    North Sea
    Description

This is the first version (v1.0) of the hydrographic part of the "Baltic and North Sea Climatology (BNSC)". The parameters provided here are water temperature and salinity on 105 depth levels. The data product comprises the time period from 1873-2015 and is based on more than one million observational profiles, which were obtained from several different data sources in the region of the Baltic, the North Sea and adjacent areas of the North Atlantic Ocean (15°W-30°E, 47°N-66°N). Intersection of observational data from different data sources is avoided, and the in situ data were subjected to an elaborate automatic quality control to identify erroneous observations that would bias the data product. Additionally, a correction of the temporal sampling error was applied to minimize the impact of the temporal distribution of the observations on the created temporal mean fields.

The data product consists of gridded mean fields of water temperature and salinity. The spatial resolution is 0.25° in meridional and zonal direction. The depth levels are irregularly distributed: for the depth interval from 0 to 50 m the distance between the single depth levels is 5 m. Below 50 m, the distance increases progressively by 1 m to the last depth level of 4985 m. The dimensions of the data product are 180*76*105 (longitude, latitude, depth).

The BNSC climatology consists, on the one hand, of time series of monthly and annual mean values of the hydrographic parameters as fields of box averages. Grid boxes with no observations are left empty. Based on these time series, decadal monthly mean fields are created for the decades 1956-1965, 1966-1975, 1976-1985, 1986-1995, 1996-2005, 2006-2015 as another part of the data product. Again, gaps remain in observational data-void regions. The third part of the data product results from the above mentioned decadal mean fields: horizontally interpolated fields obtained by application of the method of objective analysis. Consequently, this subset does not contain gaps.

Available parameters:
• box averages: monthly and annual mean, resp. standard deviation, number of observations
• decadal box averages: decadal monthly mean, resp. standard deviation, mean year, standard deviation to mean year, number of years
• decadal interpolated mean: interpolated monthly mean, absolute median deviation, number of bins, first guess, relative interpolation error, mean year, mean distance

The products are publicly available at the ICDC portal (https://icdc.cen.uni-hamburg.de/1/daten/ocean/bnsc/).
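
The irregular depth axis can be reconstructed directly from the description (5 m spacing down to 50 m, then the spacing growing by 1 m per level), which indeed yields 105 levels ending at 4985 m; a minimal sketch:

```python
def bnsc_depth_levels():
    """Reconstruct the 105 BNSC depth levels as described above."""
    levels = list(range(0, 51, 5))     # 0, 5, ..., 50 m: 11 levels at 5 m spacing
    step = 6                           # below 50 m the spacing grows by 1 m per level
    while len(levels) < 105:
        levels.append(levels[-1] + step)
        step += 1
    return levels

levels = bnsc_depth_levels()
assert len(levels) == 105 and levels[-1] == 4985   # matches the stated grid dimensions
```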

  12. nClimGrid Historical Observations Hot Days

    • cris.climate.gov
    • hub.arcgis.com
    Updated Jun 3, 2025
    + more versions
    Cite
    National Climate Resilience (2025). nClimGrid Historical Observations Hot Days [Dataset]. https://cris.climate.gov/datasets/nclimgrid-historical-observations-hot-days
    Dataset updated
    Jun 3, 2025
    Dataset authored and provided by
    National Climate Resilience
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

The Climate Resilience Information System (CRIS) provides data and tools for developers of climate services. This layer has historical variables in decadal increments from 1950 to 2020 derived from historical observations of air temperature and precipitation. The variables included are:

• Annual number of days with a maximum temperature greater than or equal to 85°F
• Annual number of days with a maximum temperature greater than or equal to 86°F
• Annual number of days with a maximum temperature greater than or equal to 90°F
• Annual number of days with a maximum temperature greater than or equal to 95°F
• Annual number of days with a maximum temperature greater than or equal to 100°F
• Annual number of days with a maximum temperature greater than or equal to 105°F
• Annual number of days with a maximum temperature greater than or equal to 110°F
• Annual number of days with a maximum temperature greater than or equal to 115°F

This layer uses data from the NOAA Monthly U.S. Climate Gridded Dataset (nClimGrid). Further processing by Esri is explained below. For each variable, there are mean values for the respective geographies: counties, tribal areas, and HUC-8 watersheds. The process for deriving these summaries is available from the CRIS Website's About the Data. Other climate variables are available from the CRIS Data page. Additional geographies, including Alaska, Hawai'i and Puerto Rico, will be made available in the future.

Geographies

This layer provides historic values for three geographies: county, tribal area, and HUC-8 watershed.

• County: based on the U.S. Census TIGER/Line 2022 distribution.
• Tribal areas: based on the U.S. Census American Indian/Alaska Native/Native Hawaiian Area dataset 2022 distribution. This dataset includes federal- and state-recognized statistical areas.
• HUC-8 watershed: based on the USGS Watershed Boundary Dataset, part of the National Hydrography Database Plus High Resolution.

Time Ranges

Historic climate threshold values (e.g. Days Over 90°F) were calculated for each year from 1950 to 2020. To ensure the layer displays time correctly, under 'Map properties' set Time zone to 'Universal Coordinated Time (UTC)' and under 'Time slider options' set Time intervals to '1 Decade'.

Data Citation

Vose, Russell S., Applequist, Scott, Squires, Mike, Durre, Imke, Menne, Matthew J., Williams, Claude N. Jr., Fenimore, Chris, Gleason, Karin, and Arndt, Derek (2014): NOAA Monthly U.S. Climate Gridded Dataset (nClimGrid), Version 1. NOAA National Centers for Environmental Information. https://doi.org/10.7289/V5SX6B56.

Data Export

Exporting this data into shapefiles, geodatabases, GeoJSON, etc. is enabled.
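
The threshold variables can be illustrated with a small sketch (not the Esri/NOAA processing itself); `tmax` is a hypothetical daily maximum-temperature series in °F indexed by date:

```python
import pandas as pd

def hot_days_per_year(tmax: pd.Series, thresholds=(85, 86, 90, 95, 100, 105, 110, 115)) -> pd.DataFrame:
    """Count, per calendar year, the days with maximum temperature at or above each threshold."""
    out = {}
    for t in thresholds:
        out[f"days_ge_{t}F"] = (tmax >= t).groupby(tmax.index.year).sum()
    return pd.DataFrame(out)
```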

  13. Stratospheric Water and OzOne Satellite Homogenized (SWOOSH) data set

    • datasets.ai
    • gimi9.com
    • +4 more
    0, 33
    Updated Sep 11, 2024
    Cite
    National Oceanic and Atmospheric Administration, Department of Commerce (2024). Stratospheric Water and OzOne Satellite Homogenized (SWOOSH) data set [Dataset]. https://datasets.ai/datasets/stratospheric-water-and-ozone-satellite-homogenized-swoosh-data-set2
    Available download formats: 0, 33
    Dataset updated
    Sep 11, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Authors
    National Oceanic and Atmospheric Administration, Department of Commerce
    Description

    The Stratospheric Water and Ozone Satellite Homogenized (SWOOSH) data set is a merged record of stratospheric ozone and water vapor measurements taken by a number of limb sounding and solar occultation satellites over the previous ~30 years. The SWOOSH record spans 1984 to present, and is comprised of data from the SAGE-II/III, UARS HALOE, UARS MLS, and Aura MLS instruments. The measurements are homogenized by applying corrections that are calculated from data taken during time periods of instrument overlap. The primary SWOOSH data product consists of monthly-mean zonal-mean values on a pressure grid. In addition to the primary (zonal-mean) grid, SWOOSH data are also available on 3D (longitude/latitude/pressure), equivalent latitude, and isentropic grids. The gridded data include the mean, standard deviation, number of observations, and mean uncertainty from each instrument. Also included is a merged (multi-instrument) product based on a weighted mean of the available measurements. Because the merged product contains missing data, a merged and filled product is also provided for (e.g., modeling) studies requiring a continuous dataset.

  14. HadUK-Grid Climate Observations by UK river basins, v1.1.0.0 (1836-2021)

    • catalogue.ceda.ac.uk
    • data-search.nerc.ac.uk
    Updated Jan 18, 2025
    + more versions
    Cite
    Dan Hollis; Mark McCarthy; Michael Kendon; Tim Legg (2025). HadUK-Grid Climate Observations by UK river basins, v1.1.0.0 (1836-2021) [Dataset]. https://catalogue.ceda.ac.uk/uuid/39b1337028d147d9b572ae352490bed0
    Dataset updated
    Jan 18, 2025
    Dataset provided by
    Centre for Environmental Data Analysis (http://www.ceda.ac.uk/)
    Authors
    Dan Hollis; Mark McCarthy; Michael Kendon; Tim Legg
    License

    Open Government Licence 3.0, http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
    License information was derived automatically

    Time period covered
    Jan 1, 1836 - Dec 31, 2021
    Area covered
    Variables measured
    time, region, area_type, wind_speed, air_temperature, relative_humidity, surface_temperature, duration_of_sunshine, surface_snow_binary_mask, air_pressure_at_sea_level, and 3 more
    Description

    HadUK-Grid is a collection of gridded climate variables derived from the network of UK land surface observations. The data have been interpolated from meteorological station data onto a uniform grid to provide complete and consistent coverage across the UK. These data at 1 km resolution have been averaged across a set of discrete geographies defining UK river basins consistent with data from UKCP18 climate projections. The dataset spans the period from 1836 to 2021, but the start time is dependent on climate variable and temporal resolution.

    The gridded data are produced for daily, monthly, seasonal and annual timescales, as well as long term averages for a set of climatological reference periods. Variables include air temperature (maximum, minimum and mean), precipitation, sunshine, mean sea level pressure, wind speed, relative humidity, vapour pressure, days of snow lying, and days of ground frost.

    This data set supersedes the previous versions of this dataset which also superseded UKCP09 gridded observations. Subsequent versions may be released in due course and will follow the version numbering as outlined by Hollis et al. (2018, see linked documentation).

    The changes for v1.1.0.0 HadUK-Grid datasets are as follows:

    • The addition of data for calendar year 2021

    • The addition of 30 year averages for the new reference period 1991-2020

• An update to 30 year averages for 1961-1990 and 1981-2010. This is an order-of-operation change: in this version, 30 year averages have been calculated from the underlying monthly/seasonal/annual grids (grid-then-average), whereas in previous versions they were grids of interpolated station averages (average-then-grid). This change results in small differences to the values, but provides improved consistency with the monthly/seasonal/annual series grids. However, it also means that 1961-1990 averages are not included for the sfcWind or snowlying variables, because the start dates for these variables are 1969 and 1971 respectively.

    • A substantial new collection of monthly rainfall data have been added for the period before 1960. These data originate from the rainfall rescue project (Hawkins et al. 2022) and this source now accounts for 84% of pre-1960 monthly rainfall data, and the monthly rainfall series has been extended back to 1836.

    Net changes to the input station data used to generate this dataset:

• Total of 122,664,065 observations
• 118,464,870 (96.5%) unchanged
• 4,821 (0.004%) modified for this version
• 4,194,374 (3.4%) added in this version
• 5,887 (0.005%) deleted from this version

The primary purpose of these data is to facilitate monitoring of UK climate and research into climate change, impacts and adaptation. The datasets have been created by the Met Office with financial support from the Department for Business, Energy and Industrial Strategy (BEIS) and Department for Environment, Food and Rural Affairs (DEFRA) in order to support the Public Weather Service Customer Group (PWSCG), the Hadley Centre Climate Programme, and the UK Climate Projections (UKCP18) project. The output from a number of data recovery activities relating to 19th and early 20th Century data have been used in the creation of this dataset; these activities were supported by: the Met Office Hadley Centre Climate Programme; the Natural Environment Research Council project "Analysis of historic drought and water scarcity in the UK"; the UK Research & Innovation (UKRI) Strategic Priorities Fund UK Climate Resilience programme; the UK Natural Environment Research Council (NERC) Public Engagement programme; the National Centre for Atmospheric Science; the National Centre for Atmospheric Science and the NERC GloSAT project; and the contribution of many thousands of public volunteers. The dataset is provided under Open Government Licence.

  15. XMM-Newton Serendipitous Source Catalog from Stacked Observations: Obs. Data...

    • res1catalogd-o-tdatad-o-tgov.vcapture.xyz
    • data.nasa.gov
    • +1 more
    Updated Jul 11, 2025
    + more versions
    Cite
    High Energy Astrophysics Science Archive Research Center (2025). XMM-Newton Serendipitous Source Catalog from Stacked Observations: Obs. Data [Dataset]. https://res1catalogd-o-tdatad-o-tgov.vcapture.xyz/dataset/xmm-newton-serendipitous-source-catalog-from-stacked-observations-obs-data
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    High Energy Astrophysics Science Archive Research Center
    Description

The stacked catalog 4XMM-DR14s (XMMSTACK) has been compiled from 1,751 groups, comprising 10,336 overlapping XMM-Newton observations. They were selected from the public observations taken between 2000 February 1 and 2023 November 16 which overlap by at least one arcminute in radius. It contains 427,524 unique sources, 329,972 of them multiply observed, with positions and source parameters like fluxes in the XMM-Newton standard energy bands, hardness ratios, a quality estimate, and information on inter-observation variability. The parameters are directly derived from the simultaneous fit and, wherever applicable, additionally calculated for each contributing observation. Exposures that do not qualify for source detection, for example because of a high background level, are used for subsequent PSF photometry: source fluxes and flux-related parameters are derived for them at the source position and extent found during source detection. 4XMM-DR14s lists 1,807,316 individual flux measurements of the 427,524 unique sources.

Stacked source detection aims at exploring the multiply observed sky regions and at exploiting their survey potential, in particular to study the long-term behavior of X-ray emitting sources. It thus makes use of the long(er) effective exposure time per sky area and offers the opportunity to investigate flux variability directly through the source detection process. The main catalog properties are summarized in the table below, and the data processing and the stacked source detection are described in the processing summary. To ensure detection quality, background levels are assessed, and event-based astrometric corrections are applied before running source detection. After source detection, problematic detections and detection parameters are flagged by an automated algorithm. All detections are screened visually, and obviously spurious sources are flagged manually.

This table contains the source parameters from the individual observations in the stacked catalog, 4XMM-DR14s. The parameters are derived from the simultaneous source-detection fit to all stacked observations at the common source position for each observation that covers a source, amounting to 1,807,316 measurements. The mean source parameters from stacked source detection are provided in the associated main table 4XMM-DR14s, referred to as XMMSTACK. The authors referred to the EPIC instruments with the following designations: PN, M1 (MOS1), and M2 (MOS2). The energy bands used in the 4XMM processing were the same as for the 3XMM catalog. The following are the basic energy bands:

Band 1: 0.2-0.5 keV
Band 2: 0.5-1.0 keV
Band 3: 1.0-2.0 keV
Band 4: 2.0-4.5 keV
Band 5: 4.5-12.0 keV

All-EPIC values cover the energy range 0.2-12.0 keV. The full catalog documentation can be found at https://res1xmmsscd-o-taipd-o-tde.vcapture.xyz/. The following table gives an overview of the statistics of this catalog in comparison with the previous stacked catalogs, 4XMM-DR14s through 3XMM-DR7s; the values below are given in that order (4XMM-DR14s; 4XMM-DR13s; 4XMM-DR12s; 4XMM-DR11s; 4XMM-DR10s; 4XMM-DR9s; 3XMM-DR7s):

• Number of stacks: 1,751; 1,688; 1,620; 1,475; 1,396; 1,329; 434
• Number of observations: 10,336; 9,796; 9,355; 8,292; 7,803; 6,604; 789
• Time span, first to last observation: Feb 01, 2000 - Nov 16, 2023; Feb 01, 2000 - Nov 29, 2022; Feb 01, 2000 - Dec 04, 2021; Feb 03, 2000 - Dec 17, 2020; Feb 03, 2000 - Dec 14, 2019; Feb 03, 2000 - Nov 13, 2018; Feb 20, 2000 - Apr 02, 2016
• Approximate sky coverage (sq. deg.): 685; 650; 625; 560; 540; 485; 150
• Approximate multiply observed sky area (sq. deg.): 440; 420; 400; 350; 335; 300; 100
• Total number of sources: 427,524; 401,596; 386,043; 358,809; 335,812; 288,191; 71,951
• Sources with several contributing observations: 329,972; 310,478; 298,626; 275,440; 256,213; 218,283; 57,665
• Multiply observed sources with flag 0 or 1: 276,058; 262,842; 252,445; 233,542; 216,999; 191,497; 55,450
• Multiply observed with a total detection likelihood of at least six: 266,129; 251,555; 241,880; 224,178; 208,921; 181,132; 49,935
• Multiply observed with a total detection likelihood of at least ten: 226,219; 213,812; 205,394; 189,556; 176,680; 153,487; 42,077
• Total measurements: 1,807,316; 1,683,264; 1,592,263; 1,421,966; 1,322,299; 1,033,264; 216,393
• Maximum exposures per source: 173; 170; 155; 140; 140; 103; 69
• Maximum observations per source: 77; 77; 70; 65; 65; 40; 23
• Maximum on-time per source: 2.8 Ms; 2.8 Ms; 2.8 Ms; 2.8 Ms; 2.8 Ms; 1.9 Ms; 1.3 Ms

This database table was last updated by the HEASARC in July 2024. It contains the 4XMM-DR14s observations catalog, released by ESA on 2024-07-09 and obtained from the XMM-Newton Survey Science Center Consortium at https://res1xmmsscd-o-taipd-o-tde.vcapture.xyz/cms/catalogues/4xmm-dr14s/. It is also available as a gzipped FITS file. This is a service provided by NASA HEASARC.

  16. UBVRI photometry of FK5 Ext. stars - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Apr 19, 2023
    Cite
    (2023). UBVRI photometry of FK5 Ext. stars - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/433ee6e7-e8ea-54d2-a010-fc77a957cdec
    Dataset updated
    Apr 19, 2023
    Description

Table 1 includes: the star number in the E-regions; the HD or CPD number; the V magnitude; the B-V, U-V, V-R and R-I color indices; e, the mean square error of the averages published (unit: 0.001 magnitude); n, the number of observations for each star; and Sp, the spectral types (Graham, 1982PASP...94..244G). Table 2 gives similar information for the program stars. An asterisk after the star number means that the observations were made with a diaphragm, and a v means a variable star. The results of the observations of the FK5 Extension stars Nx 4173, 4409, 5355, 5400, 5410, 5593, 5671, 5886 and 6056 show that they are variable stars; the published results correspond to the mean values obtained from the observations.

  17. Global Surface Summary of the Day - GSOD

    • data.noaa.gov
    • ncei.noaa.gov
    • +3 more
    csv, https
    Updated Feb 12, 2025
    + more versions
    Cite
    (2025). Global Surface Summary of the Day - GSOD [Dataset]. https://data.noaa.gov/onestop/collections/details/33ac52f2-4da3-471c-9059-7f4485baa498
    Available download formats: https, csv
    Dataset updated
    Feb 12, 2025
    Time period covered
    Jan 1, 1929 - Present
    Area covered
    Earth, geographic bounding box, Geographic Region > Global Land, Vertical Location > Land Surface
    Description

    Global Surface Summary of the Day is derived from the Integrated Surface Hourly (ISH) dataset. The ISH dataset includes global data obtained from the USAF Climatology Center, located in the Federal Climate Complex with NCDC. The latest daily summary data are normally available 1-2 days after the date-time of the observations used in the daily summaries. The online data files begin with 1929 and are, at the time of this writing, at the Version 8 software level. Over 9000 stations' data are typically available. The daily elements included in the dataset (as available from each station) are:

    * Mean temperature (.1 Fahrenheit)
    * Mean dew point (.1 Fahrenheit)
    * Mean sea level pressure (.1 mb)
    * Mean station pressure (.1 mb)
    * Mean visibility (.1 miles)
    * Mean wind speed (.1 knots)
    * Maximum sustained wind speed (.1 knots)
    * Maximum wind gust (.1 knots)
    * Maximum temperature (.1 Fahrenheit)
    * Minimum temperature (.1 Fahrenheit)
    * Precipitation amount (.01 inches)
    * Snow depth (.1 inches)
    * Indicators for the occurrence of fog, rain or drizzle, snow or ice pellets, hail, thunder, and tornado/funnel cloud

    Global summary of day data for 18 surface meteorological elements are derived from the synoptic/hourly observations contained in USAF DATSAV3 Surface data and the Federal Climate Complex Integrated Surface Hourly (ISH) dataset. Historical data are generally available from 1929 to the present, with data from 1973 to the present being the most complete. For some periods, one or more countries' data may not be available due to data restrictions or communications problems. In deriving the summary of day data, a minimum of 4 observations for the day must be present (this allows for stations which report 4 synoptic observations per day). Since the data are converted to constant units (e.g., knots), slight rounding error from the originally reported values may occur (e.g., 9.9 instead of 10.0). The mean daily values are based on the hours of operation for the station. For some stations/countries, the visibility will sometimes 'cluster' around a value (such as 10 miles) due to the practice of not reporting visibilities greater than certain distances. The daily extremes and totals (maximum wind gust, precipitation amount, and snow depth) will only appear if the station reports the data sufficiently to provide a valid value; therefore, these three elements will appear less frequently than other values. Also, these elements are derived from the station's reports during the day and may comprise a 24-hour period which includes a portion of the previous day. The data are reported and summarized based on Greenwich Mean Time (GMT, 0000Z-2359Z), since the original synoptic/hourly data are reported and based on GMT. An illustrative sketch of this daily summarization rule is given below.
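    As a purely illustrative sketch (not NCEI's production code), the summarization rule described above, GMT-based days with a minimum of four reports per day and values rounded to the reported precision, could be expressed as follows:

```python
# Illustrative sketch of the stated GSOD rule: a daily mean is only formed
# when at least 4 synoptic/hourly reports exist for the GMT day.
from collections import defaultdict
from datetime import datetime

def daily_means(observations, min_reports=4):
    """observations: iterable of (UTC datetime, value); returns {date: mean}."""
    by_day = defaultdict(list)
    for ts, value in observations:
        by_day[ts.date()].append(value)          # GMT day (0000Z-2359Z)
    return {
        day: round(sum(vals) / len(vals), 1)     # values reported to 0.1 units
        for day, vals in by_day.items()
        if len(vals) >= min_reports              # at least 4 synoptic reports
    }

# Example: four 6-hourly temperature reports on one day -> one daily mean.
obs = [(datetime(2024, 1, 1, h), t)
       for h, t in [(0, 50.0), (6, 52.0), (12, 60.0), (18, 54.0)]]
print(daily_means(obs))   # {datetime.date(2024, 1, 1): 54.0}
```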

  18. HUN GW Uncertainty Analysis v01

    • data.gov.au
    • data.wu.ac.at
    zip
    Updated Jun 27, 2022
    + more versions
    Cite
    Bioregional Assessment Program (2022). HUN GW Uncertainty Analysis v01 [Dataset]. https://data.gov.au/dataset/3b9239f2-561b-47f4-b5f5-eb3bea4bdd47
    Explore at:
    zip (available download formats)
    Dataset updated
    Jun 27, 2022
    Dataset provided by
    Bioregional Assessment Program
    License

    Attribution 2.5 (CC BY 2.5)https://creativecommons.org/licenses/by/2.5/
    License information was derived automatically

    Description

    Abstract: The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement, and the processes undertaken to produce this derived dataset are described in the History field. This dataset contains all the scripts used to carry out the uncertainty analysis for the maximum drawdown and the time to maximum drawdown at the groundwater receptors in the Hunter bioregion, together with all the resulting posterior predictions. This is described in product 2.6.2 Groundwater numerical modelling (Herron et al. 2016). See History for a detailed explanation of the dataset contents.

    References: Herron N, Crosbie R, Peeters L, Marvanek S, Ramage A and Wilkins A (2016) Groundwater numerical modelling for the Hunter subregion. Product 2.6.2 for the Hunter subregion from the Northern Sydney Basin Bioregional Assessment. Department of the Environment, Bureau of Meteorology, CSIRO and Geoscience Australia, Australia.

    Dataset History

    This dataset uses the results of the design of experiment runs of the groundwater model of the Hunter subregion to train emulators to (a) constrain the prior parameter ensembles into the posterior parameter ensembles and (b) generate the predictive posterior ensembles of maximum drawdown and time to maximum drawdown. This is described in product 2.6.2 Groundwater numerical modelling (Herron et al. 2016). A flow chart of the way the various files and scripts interact is provided in HUN_GW_UA_Flowchart.png (editable version in HUN_GW_UA_Flowchart.gliffy).

    R-script HUN_DoE_Parameters.R creates the set of parameters for the design of experiment in HUN_DoE_Parameters.csv. Each of these parameter combinations is evaluated with the groundwater model (dataset HUN GW Model v01). Associated with this spreadsheet is file HUN_GW_Parameters.csv, which contains, for each parameter: whether it is included in the sensitivity analysis or tied to another parameter, the initial value and range, the transformation, and the type of prior distribution with its mean and covariance structure.

    The results of the design of experiment model runs are summarised in files HUN_GW_dmax_DoE_Predictions.csv, HUN_GW_tmax_DoE_Predictions.csv, HUN_GW_DoE_Observations.csv and HUN_GW_DoE_mean_BL_BF_hist.csv, which contain the maximum additional drawdown and the time to maximum additional drawdown for each receptor, and the simulated equivalents to the observed groundwater levels and SW-GW fluxes, respectively. These are generated with post-processing scripts in dataset HUN GW Model v01 from the output (as exemplified in dataset HUN GW Model simulate ua999 pawsey v01). Spreadsheets HUN_GW_dmax_Predictions.csv and HUN_GW_tmax_Predictions.csv capture additional information on each prediction: the name of the prediction, transformation, min, max and median of the design of experiment, a boolean to indicate whether the prediction is to be included in the uncertainty analysis, the layer it is assigned to, and which objective function to use to constrain the prediction.

    Spreadsheet HUN_GW_Observations.csv has additional information on each observation: the name of the observation, a boolean to indicate whether to use the observation, the min and max of the design of experiment, a metadata statement describing the observation, the spatial coordinates, the observed value, and the number of observations at this location (from dataset HUN bores v01). It also has the distance of each bore to the nearest blue line network and the distance to each prediction (both in km). Spreadsheet HUN_GW_mean_BL_BF_hist.csv has similar information, but for the SW-GW flux; the observed values are from dataset HUN Groundwater Flowrate Time Series v01.

    These files are used in script HUN_GW_SI.py to generate sensitivity indices (based on the Plischke et al. (2013) method) for each group of observations and predictions. These indices are saved in spreadsheets HUN_GW_dmax_SI.csv, HUN_GW_tmax_SI.csv, HUN_GW_hobs_SI.py and HUN_GW_mean_BF_hist_SI.csv.

    Script HUN_GW_dmax_ObjFun.py calculates the objective function values for the design of experiment runs. Each prediction has a tailored objective function, which is a weighted sum of the residuals between observations and predictions, with weights based on the distance between observation and prediction. In addition there is an objective function for the baseflow rates. The results are stored in HUN_GW_DoE_ObjFun.csv and HUN_GW_ObjFun.csv.

    The latter files are used in scripts HUN_GW_dmax_CreatePosteriorParameters.R to carry out the Monte Carlo sampling of the prior parameter distributions with the Approximate Bayesian Computation methodology, as described in Herron et al. (2016), by generating and applying emulators for each objective function. The scripts use the scripts in dataset R-scripts for uncertainty analysis v01 and are run on the high performance computation cluster machines with batch file HUN_GW_dmax_CreatePosterior.slurm. They result in posterior parameter combinations for each objective function, stored in directory PosteriorParameters with filename convention HUN_GW_dmax_Posterior_Parameters_OO_$OFName$.csv, where $OFName$ is the name of the objective function. Python script HUN_GW_PosteriorParameters_Percentiles.py summarises these posterior parameter combinations and stores the results in HUN_GW_PosteriorParameters_Percentiles.csv.

    The same set of spreadsheets is used to test convergence of the emulator performance with script HUN_GW_emulator_convergence.R and batch file HUN_GW_emulator_convergence.slurm, producing spreadsheet HUN_GW_convergence_objfun_BF.csv.

    The posterior parameter distributions are sampled with script HUN_GW_dmax_tmax_MCsampler.R and its associated .slurm batch file. The script creates and applies an emulator for each prediction. The emulators and results are stored in directory Emulators. This directory is not part of this dataset but can be regenerated by running the scripts on the high performance computation clusters; a single emulator and associated output is included for illustrative purposes.

    Script HUN_GW_collate_predictions.csv collates all posterior predictive distributions in spreadsheets HUN_GW_dmax_PosteriorPredictions.csv and HUN_GW_tmax_PosteriorPredictions.csv. These files are further summarised in spreadsheet HUN_GW_dmax_tmax_excprob.csv with script HUN_GW_exc_prob. This spreadsheet contains, for all predictions, the coordinates, the layer, the number of samples in the posterior parameter distribution, the 5th, 50th and 95th percentiles of dmax and tmax, the probability of exceeding 1 cm and 20 cm of drawdown, the maximum dmax value from the design of experiment, the threshold of the objective function and the acceptance rate (an illustrative sketch of this summarisation step is given after the ancestor list below).

    The script HUN_GW_dmax_tmax_MCsampler.R is also used to evaluate parameter distributions HUN_GW_dmax_Posterior_Parameters_HUN_OF_probe439.csv and HUN_GW_dmax_Posterior_Parameters_Mackie_OF_probe439.csv. These are, for one prediction, different parameter distributions, of which the latter represents local information. The corresponding dmax values are stored in HUN_GW_dmax_probe439_HUN.csv and HUN_GW_dmax_probe439_Mackie.csv.

    Dataset Citation: Bioregional Assessment Programme (XXXX) HUN GW Uncertainty Analysis v01. Bioregional Assessment Derived Dataset. Viewed 13 March 2019, http://data.bioregionalassessments.gov.au/dataset/c25db039-5082-4dd6-bb9d-de7c37f6949a.

    Dataset Ancestors

    * Derived From HUN GW Model code v01
    * Derived From Hydstra Groundwater Measurement Update - NSW Office of Water, Nov2013
    * Derived From Groundwater Economic Elements Hunter NSW 20150520 PersRem v02
    * Derived From NSW Office of Water - National Groundwater Information System 20140701
    * Derived From Travelling Stock Route Conservation Values
    * Derived From HUN GW Model v01
    * Derived From NSW Wetlands
    * Derived From Climate Change Corridors Coastal North East NSW
    * Derived From Communities of National Environmental Significance Database - RESTRICTED - Metadata only
    * Derived From Climate Change Corridors for Nandewar and New England Tablelands
    * Derived From National Groundwater Dependent Ecosystems (GDE) Atlas
    * Derived From Fauna Corridors for North East NSW
    * Derived From R-scripts for uncertainty analysis v01
    * Derived From Asset database for the Hunter subregion on 27 August 2015
    * Derived From Hunter CMA GDEs (DRAFT DPI pre-release)
    * Derived From Estuarine Macrophytes of Hunter Subregion NSW DPI Hunter 2004
    * Derived From Birds Australia - Important Bird Areas (IBA) 2009
    * Derived From Camerons Gorge Grassy White Box Endangered Ecological Community (EEC) 2008
    * Derived From Asset database for the Hunter subregion on 16 June 2015
    * Derived From Spatial Threatened Species and Communities (TESC) NSW 20131129
    * Derived From Gippsland Project boundary
    * Derived From Bioregional Assessment areas v04
    * Derived From Asset database for the Hunter subregion on 24 February 2016
    * Derived From Natural Resource Management (NRM) Regions 2010
    * Derived From Gosford Council Endangered Ecological Communities (Umina woodlands) EEC3906
    * Derived From NSW Office of Water Surface Water Offtakes - Hunter v1 24102013
    * Derived From National Groundwater Dependent Ecosystems (GDE) Atlas (including WA)
    * Derived From Bioregional Assessment areas v03
    * Derived From HUN groundwater flow rate time series v01
    * Derived From Asset list for Hunter - CURRENT
    * Derived From NSW Office of Water Surface Water Entitlements Locations v1_Oct2013
    * Derived From Species Profile and Threats Database (SPRAT) - Australia - Species of National Environmental Significance Database (BA subset - RESTRICTED - Metadata only)
    * Derived From HUN GW Model simulate ua999 pawsey v01
    * Derived From Northern Rivers CMA GDEs (DRAFT DPI
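    As a hedged illustration of the final summarisation step described above (not the Programme's actual scripts), the percentiles and exceedance probabilities reported in HUN_GW_dmax_tmax_excprob.csv could be computed from a posterior predictive ensemble as follows; the input file name is hypothetical and drawdown is assumed to be in metres.

```python
# Illustrative sketch only: summarise a posterior predictive ensemble of
# maximum drawdown (dmax) at one receptor. The input file is hypothetical;
# the real outputs are collated in HUN_GW_dmax_PosteriorPredictions.csv and
# summarised in HUN_GW_dmax_tmax_excprob.csv.
import numpy as np

dmax = np.loadtxt("dmax_posterior_receptor.csv")   # one sample per line, metres

p05, p50, p95 = np.percentile(dmax, [5, 50, 95])   # 5th/50th/95th percentiles
prob_gt_1cm = (dmax > 0.01).mean()                 # probability of > 1 cm drawdown
prob_gt_20cm = (dmax > 0.20).mean()                # probability of > 20 cm drawdown

print(p05, p50, p95, prob_gt_1cm, prob_gt_20cm)
```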

  19. Abbadia Catalogue of 14263 Stars, +16 to +24{deg} - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Oct 23, 2023
    Cite
    (2023). Abbadia Catalogue of 14263 Stars, +16 to +24{deg} - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/cc33d14a-b7f6-5281-b991-332a89cd0102
    Explore at:
    Dataset updated
    Oct 23, 2023
    Description

    This catalog contains meridian circle observations of 14192 reference stars in the Paris Observatory zone of the Astrographic Catalog, +16 to +24deg, made from 1899 to 1906. The original catalog also contains a supplement of 81 stars, which is not included here. The positions have been reduced to 1900.0 on the basis of Newcomb's constants. The probable errors for most stars range from 0.0093s to 0.0161s in right ascension and from 0.096" to 0.162" in declination, depending on the number of observations. In addition to the positions, the catalog contains a running number, the magnitude from the Berlin catalogs, the mean epoch and number of observations, and the BD number.
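    To combine these positions with modern data they generally have to be brought from the 1900.0 equinox onto a current reference frame. A minimal sketch with astropy is shown below; treating the old system as FK4 is an approximation (the catalog was reduced with Newcomb's constants), and the coordinate values are placeholders rather than actual catalog entries.

```python
# Hedged sketch: transform a position given for equinox/epoch 1900.0 to the
# modern ICRS frame. Treating the catalogue system as FK4 is an approximation;
# the RA/Dec values below are placeholders, not catalogue entries.
from astropy.coordinates import SkyCoord, FK4, ICRS
import astropy.units as u

star_1900 = SkyCoord(ra=123.456 * u.deg, dec=20.0 * u.deg,
                     frame=FK4(equinox="B1900", obstime="B1900"))
print(star_1900.transform_to(ICRS()))
```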

  20. ERA5 post-processed daily statistics on single levels from 1940 to present

    • cds.climate.copernicus.eu
    grib
    Updated Sep 23, 2025
    + more versions
    Cite
    ECMWF (2025). ERA5 post-processed daily statistics on single levels from 1940 to present [Dataset]. http://doi.org/10.24381/cds.4991cf48
    Explore at:
    grib (available download formats)
    Dataset updated
    Sep 23, 2025
    Dataset provided by
    European Centre for Medium-Range Weather Forecastshttp://ecmwf.int/
    Authors
    ECMWF
    License

    https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/cc-by/cc-by_f24dc630aa52ab8c52a0ac85c03bc35e0abc850b4d7453bdc083535b41d5a5c3.pdf

    Time period covered
    Jan 1, 1940 - Sep 17, 2025
    Description

    ERA5 is the fifth generation ECMWF reanalysis for the global climate and weather for the past 8 decades. Data is available from 1940 onwards. ERA5 replaces the ERA-Interim reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. This principle, called data assimilation, is based on the method used by numerical weather prediction centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way, but at reduced resolution to allow for the provision of a dataset spanning back several decades. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations, and when going further back in time, to allow for the ingestion of improved versions of the original observations, which all benefit the quality of the reanalysis product.

    This catalogue entry provides post-processed ERA5 hourly single-level data aggregated to daily time steps. In addition to the data selection options found on the hourly page, the following options can be selected for the daily statistic calculation:

    * The daily aggregation statistic (daily mean, daily max, daily min, daily sum*)
    * The sub-daily frequency sampling of the original data (1 hour, 3 hours, 6 hours)
    * The option to shift to any local time zone in UTC (no shift means the statistic is computed from UTC+00:00)

    *The daily sum is only available for the accumulated variables (see ERA5 documentation for more details). Users should be aware that the daily aggregation is calculated during the retrieval process and is not part of a permanently archived dataset. For more details on how the daily statistics are calculated, including demonstrative code, please see the documentation. For more details on the hourly data used to calculate the daily statistics, please refer to the ERA5 hourly single-level data catalogue entry and the documentation found therein.
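    As an illustration only (the daily statistics offered here are computed on the CDS side during retrieval, as noted above), the aggregation is conceptually equivalent to resampling the hourly fields yourself, for example with xarray; the file name and the variable short name 't2m' below are assumptions.

```python
# Illustrative sketch: reproduce a daily-mean/daily-max aggregation from an
# already-downloaded hourly ERA5 subset. File name and variable name are
# assumptions; this is not the CDS post-processing code itself.
import xarray as xr

ds = xr.open_dataset("era5_hourly_t2m.nc")           # hourly 2 m temperature
daily_mean = ds["t2m"].resample(time="1D").mean()    # days counted from UTC+00:00
daily_max = ds["t2m"].resample(time="1D").max()
daily_mean.to_netcdf("era5_daily_mean_t2m.nc")
```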
