Climate Data Online (CDO) is a collection of climatic data that offers public access and consumption via discovery and ordering services. Data available through CDO are free of charge and can be viewed online or ordered and delivered to your email inbox.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
NOAA's Climate Data Records (CDRs) are robust, sustainable, and scientifically sound climate records that provide trustworthy information on how, where, and to what extent the land, oceans, atmosphere and ice sheets are changing. These datasets are thoroughly vetted time series measurements with the longevity, consistency, and continuity to assess and measure climate variability and change. NOAA CDRs are vetted using standards established by the National Research Council (NRC).
Climate Data Records are created by merging data from surface, atmosphere, and space-based systems across decades. NOAA's Climate Data Records provide authoritative and traceable long-term climate records. NOAA developed CDRs by applying modern data analysis methods to historical global satellite data. This process can clarify the underlying climate trends within the data and allows researchers and other users to identify economic and scientific value in these records. NCEI maintains and extends CDRs by applying the same methods to present-day and future satellite measurements.
Terrestrial CDRs are composed of sensor data that have been improved and quality controlled over time, together with ancillary calibration data.
This archived Paleoclimatology Study is available from the NOAA National Centers for Environmental Information (NCEI), under the World Data Service (WDS) for Paleoclimatology. The associated NCEI study type is Other Collections. The database parameters and geographic location are not specified in this record, and the begin and end dates of the time period coverage (in calendar years before present, BP) are unavailable. See the metadata for parameter and study location details. Please cite this study when using the data.
Global Surface Summary of the Day (GSOD) is derived from the Integrated Surface Hourly (ISH) dataset. The ISH dataset includes global data obtained from the USAF Climatology Center, located in the Federal Climate Complex with NCDC. The latest daily summary data are normally available 1-2 days after the date-time of the observations used in the daily summaries. The online data files begin with 1929 and are, at the time of this writing, at the Version 8 software level. Over 9,000 stations' data are typically available. The daily elements included in the dataset (as available from each station) are:
- Mean temperature (0.1 Fahrenheit)
- Mean dew point (0.1 Fahrenheit)
- Mean sea level pressure (0.1 mb)
- Mean station pressure (0.1 mb)
- Mean visibility (0.1 miles)
- Mean wind speed (0.1 knots)
- Maximum sustained wind speed (0.1 knots)
- Maximum wind gust (0.1 knots)
- Maximum temperature (0.1 Fahrenheit)
- Minimum temperature (0.1 Fahrenheit)
- Precipitation amount (0.01 inches)
- Snow depth (0.1 inches)
- Indicators for occurrence of: fog, rain or drizzle, snow or ice pellets, hail, thunder, tornado/funnel cloud
Global summary of day data for 18 surface meteorological elements are derived from the synoptic/hourly observations contained in USAF DATSAV3 Surface data and Federal Climate Complex Integrated Surface Hourly (ISH). Historical data are generally available for 1929 to the present, with data from 1973 to the present being the most complete. For some periods, one or more countries' data may not be available due to data restrictions or communications problems. In deriving the summary of day data, a minimum of 4 observations for the day must be present (this allows for stations which report 4 synoptic observations per day). Since the data are converted to constant units (e.g., knots), slight rounding error from the originally reported values may occur (e.g., 9.9 instead of 10.0). The mean daily values described below are based on the hours of operation for the station. For some stations/countries, the visibility will sometimes 'cluster' around a value (such as 10 miles) due to the practice of not reporting visibilities greater than certain distances. The daily extremes and totals (maximum wind gust, precipitation amount, and snow depth) will only appear if the station reports the data sufficiently to provide a valid value; therefore, these three elements will appear less frequently than other values. Also, these elements are derived from the stations' reports during the day and may comprise a 24-hour period which includes a portion of the previous day. The data are reported and summarized based on Greenwich Mean Time (GMT, 0000Z-2359Z), since the original synoptic/hourly data are reported and based on GMT.
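The derivation rules above (GMT-based days, at least 4 observations per day, values reported to a fixed precision) can be illustrated with a small pandas sketch; the column names and input layout below are assumptions for illustration, not the actual ISH/GSOD format or processing code.

```python
# Minimal sketch (not the official GSOD processing code): derive daily summaries
# from hourly observations, keeping only GMT days with at least 4 reports.
import pandas as pd

# Hypothetical input: a DataFrame of hourly observations with UTC timestamps,
# air temperature (deg F), and wind speed (knots).
obs = pd.DataFrame({
    "timestamp": pd.date_range("2020-01-01", periods=48, freq="3h", tz="UTC"),
    "temp_f": 50.0,
    "wind_kt": 9.9,
})

daily = (
    obs.set_index("timestamp")
       .groupby(pd.Grouper(freq="1D"))           # group by GMT calendar day
       .agg(n_obs=("temp_f", "size"),
            mean_temp=("temp_f", "mean"),
            max_temp=("temp_f", "max"),
            min_temp=("temp_f", "min"),
            mean_wind=("wind_kt", "mean"))
)
daily = daily[daily["n_obs"] >= 4]               # GSOD rule: at least 4 observations per day
print(daily.round(1))                            # report to 0.1 precision, as in the element list
```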
The IRI Data Library is a powerful and freely accessible online data repository and analysis tool that allows a user to view, manipulate, and download over 400 climate-related data sets through a standard web browser. The Data Library contains a wide variety of publicly available data sets, including station and gridded atmospheric and oceanic observations and analyses, model-based analyses and forecasts, and land surface and vegetation data sets, from a range of sources. It includes a flexible, interactive data viewer that allows a user to visualize multi-dimensional data sets in several combinations, create animations, and customize and download plots and maps in a variety of image formats. The Data Library is also a powerful computational engine that can perform analyses of varying complexity using an extensive array of statistical analysis tools. Online tutorials and function documentation are available to aid the user in applying these tools to the holdings available in the Data Library. Data sets and the results of any calculations performed by the user can be downloaded in a wide variety of file formats, from simple ASCII text to GIS-compatible files to fully self-describing formats, or transferred directly to software applications that use the OPeNDAP protocol. This flexibility allows the Data Library to be used as a collaborative tool among different disciplines and to build new data discovery and analysis tools.
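As a brief illustration of the OPeNDAP access mentioned above, the sketch below opens a remote dataset with xarray; the URL and variable name are placeholders for illustration, not actual IRI Data Library endpoints.

```python
# Sketch only: open a remote dataset over OPeNDAP with xarray.
# The URL and variable name below are placeholders, not real Data Library paths.
import xarray as xr

url = "http://example-opendap-server/path/to/dataset"  # hypothetical OPeNDAP endpoint
ds = xr.open_dataset(url)          # lazily opens the remote dataset (needs netCDF4 or pydap installed)
subset = ds["precip"].sel(time=slice("2000-01-01", "2000-12-31"))  # assumed variable name
print(subset.mean().values)        # only the requested slice is pulled over the network
```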
This resource demonstrates the workflow developed to prepare downscaled GCM data for input to Model My Watershed (ModelMyWatershed.org). GCM data for the Delaware River Basin were assembled from 19 GCMs, including each model's RCP4.5 and RCP8.5 scenarios; this was performed by Dr. Tim Hawkins, Shippensburg University (http://www.ship.edu/geo-ess/). Downscaled precipitation data from global climate models (GCMs) do not accurately retain the magnitude and frequency of individual storm events for a given location. This lack of predictive resolution of event magnitude and frequency limits the realism of rainfall-runoff models used for predicting watershed hydrology under future climate scenarios. To address this problem, Maimone et al. (2019) developed a method for summarizing the statistical distribution of precipitation event magnitude and frequency that could be applied to downscaled GCM precipitation predictions. Applying the methods here to downscaled GCM scenarios requires that those predictions do not include an increase in the number of days of precipitation per year. Maimone et al. (2019) state this requirement: "Because GCM projections for the Philadelphia region do not indicate an increase in the number of wet days per year, future increases in precipitation are the result of the existing number and distribution of wet days becoming more intense."
I developed a workflow to replicate Maimone et al.'s methods and provide an example of it in this Resource. There are three sections of the R Markdown document. The first section seeks to replicate the synthetic weather generator developed by Maimone et al. (2019) using an example dataset. The second section applies those methods to the downscaled GCM ensemble average conditions for the Delaware River Basin provided by Dr. Hawkins. The third section develops depth-duration-frequency statistics for the 24-hour storm event relevant to the 2080-2100 predictions. To open the R Markdown document and execute the workflow yourself, find the Open With dropdown list in the upper right-hand corner of this Resource and select CUAHSI JupyterHub.
The first section uses an example precipitation dataset from the Philadelphia Airport for the period 01 January 1995 through 31 December 2013. The data were downloaded from NOAA's Climate Data Online Search portal: https://www.ncdc.noaa.gov/cdo-web/search.
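Because the Maimone et al. (2019) approach assumes no increase in the number of wet days per year, a quick check of the example dataset is a natural first step; the sketch below counts wet days per year from a CDO daily export. The filename and column names are assumptions about the download format, not guaranteed by the portal.

```python
# Sketch under assumptions: a daily CDO export with DATE and PRCP columns
# (actual column names depend on the options chosen when downloading).
import pandas as pd

df = pd.read_csv("philadelphia_airport_daily.csv", parse_dates=["DATE"])  # hypothetical filename
df["wet"] = df["PRCP"] > 0                       # a "wet day" = any measurable precipitation
wet_days_per_year = df.groupby(df["DATE"].dt.year)["wet"].sum()
print(wet_days_per_year)                         # check whether the wet-day count trends upward
```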
The downloaded data and metadata for this NOAA Climate Data are available on Hydroshare here: http://www.hydroshare.org/resource/60058ceda8334e68be141516c5b8de3f. Additional data on precipitation frequency at the Philadelphia Airport was downloaded from the NOAA Hydrometeorological Design Studies Center: https://hdsc.nws.noaa.gov/hdsc/pfds/index.html.
An example of working with this type of NOAA Climate Data is provided on the NEON website here: https://www.neonscience.org/da-viz-coop-precip-data-R.
References: Maimone, M., S. Malter, J. Rockwell, and V. Raj. 2019. Transforming Global Climate Model Precipitation Output for Use in Urban Stormwater Applications. Journal of Water Resources Planning and Management 145:04019021.
This data package contains locally verified daily meteorological observations from a NOAA National Weather Service station located at the USDA Jornada Experimental Range headquarters in southern New Mexico, USA. Daily data have been collected there by USDA staff since 1914 for minimum and maximum air temperature and daily accumulated precipitation using standard U.S. climatological service instrumentation and procedures. The included data were verified and transcribed directly from the original paper data sheets and have undergone quality control and assurance procedures different from those in place at NOAA. These data therefore differ from those directly downloadable from NOAA servers. Local verification and transcription of observations from the data sheets ceased in 2006 and data are now directly entered into the NOAA system. Therefore, this dataset is complete and will no longer be added to. All observations from this weather station have also undergone NOAA QA/QC procedures, and those data are available by accessing the Jornada Experimental Range, NM US GHCN station through the National Climatic Data Center portal (https://www.ncdc.noaa.gov/cdo-web/datasets/GHCND/stations/GHCND:USC00294426/detail - daily and monthly data are available).
The climate station at the Macon Middle School near Franklin, NC was established in February 2000. The objectives were to: 1) locate a climate station outside of the Coweeta Basin, but within Macon County, North Carolina, measuring air temperature, air humidity, precipitation, solar radiation, wind speed, wind direction, vapor pressure, and barometric pressure; 2) provide students and teachers at the school with a unique teaching tool; and 3) provide the public with an online archive of weather data.
The GPCP Daily analysis is a companion to the GPCP Monthly analysis, and provides globally complete precipitation estimates at a spatial resolution of one degree latitude-longitude and a daily time scale from October 1996 to the present. The data set is part of World Climate Research Program (WCRP) and GEWEX activities, forming part of the array of data sets describing the water and energy cycles of the planet under the auspices of the GEWEX Data and Assessment Panel (GDAP).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Weather data from two weather stations at Stuttgart Rice Research and Extension center are archived. Current air temperature, relative humidity, wind speed, solar radiation and soil temperature data are provided by station and are displayed and archived either hourly or daily. Historical weather data goes back to 2008. Resources in this dataset: Resource Title: Weather Station Data. File Name: Web Page, url: https://www.ars.usda.gov/southeast-area/stuttgart-ar/dale-bumpers-national-rice-research-center/docs/weather-station-data/
This data release consists of Network Common Data Form (NetCDF) data sets of daily total-precipitation and minimum and maximum air temperatures for the time period from January 1, 1895 to December 31, 1915. These data sets are based on individual station data obtained for 153 National Oceanic and Atmospheric Administration (NOAA) weather stations in Florida and parts of Georgia, Alabama, and South Carolina (available at http://www.ncdc.noaa.gov/cdo-web/results). Weather station data were used to produce a total of 23,007 daily raster surfaces (7,669 daily raster surfaces for each of the 3 data sets) using a thin-plate-spline method of interpolation. The geographic extent of the weather station data coincides with the geographic extent of the Floridan aquifer system, with the exception of a small portion of southeast Mississippi where the Floridan aquifer system is saline and was not used.
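As an illustration of the interpolation step described above, the following sketch fits a thin-plate spline to one day of synthetic station values and evaluates it on a regular grid with scipy; the station coordinates, values, and grid extents are made up, and the actual USGS processing may differ in its details.

```python
# Illustrative sketch (not the USGS processing code): interpolate one day of station
# values onto a regular grid with a thin-plate spline, as in the data release.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
station_xy = rng.uniform([-88.0, 25.0], [-80.0, 34.0], size=(153, 2))  # synthetic lon/lat of 153 stations
station_precip = rng.gamma(2.0, 2.0, size=153)                         # synthetic daily precipitation values

spline = RBFInterpolator(station_xy, station_precip, kernel="thin_plate_spline")

# Evaluate the fitted surface on a regular lon/lat grid for that day.
lon = np.linspace(-88.0, -80.0, 100)
lat = np.linspace(25.0, 34.0, 100)
grid = np.stack(np.meshgrid(lon, lat), axis=-1).reshape(-1, 2)
daily_surface = spline(grid).reshape(100, 100)
print(daily_surface.shape)   # one of the 7,669 daily raster surfaces per variable
```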
Link to all products and services:
http://www.sws.uiuc.edu
The Midwestern Regional Climate Center (MRCC) is a cooperative program between the National Centers for Environmental Information (NCEI) and the Illinois State Water Survey in Champaign, Illinois. Our center is a partner in a national climate service program that includes NCEI, five other Regional Climate Centers, and State Climate Offices. NCEI is part of the Department of Commerce, National Oceanic and Atmospheric Administration (NOAA).
The MRCC serves the nine-state Midwest region (Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, Ohio, and Wisconsin). Our services and research help to better explain climate and its impacts on the Midwest, provide practical solutions to specific climate problems, and allow us to develop climate information for the Midwest on climate-sensitive issues such as agriculture, climate change, energy, the environment, human health, risk management, transportation, and water resources.
Among the types of information available on cli-MATE are:
- Near real-time data for many active reporting sites
- User-defined (both time period and climate elements) maps of climate data
- Tables of current and historical climate data
- Climate summaries for individual stations
- Weekly crop yield risk assessments for corn and soybeans
- Daily soil moisture estimates
- Drought indices
Daily climate data [digital] for several thousand stations across the United States. Parameters reported include: high, low, and mean temperatures; precipitation; snowfall; snow depth; and degree days. Limited data are available on pan evaporation and soil temperatures. Many of these stations go back to 1948, although some stations go back to the turn of the century.
Surface hourly observations [digital] for over 100 sites in the eastern half of the U.S. Parameters reported include: air temperature, dewpoint, wet-bulb, pressure, relative humidity, wind speed and wind direction.
Hourly precipitation [digital] for select Midwestern sites.
Storm Data for flood, hail, high wind, tornado, blizzard and any other strange or unusual weather reports.
Historical Climate Division [digital] data back to 1895 for temperature, precipitation, degree days, and Palmer drought indices on a monthly basis.
Solar radiation [digital] data for select sites are available on a daily, monthly or annual basis.
Potential evapotranspiration [digital] data for select sites are available on a daily, monthly or annual basis.
Modeled soil moisture [digital] data for Midwestern climate divisions back to 1949 on a weekly basis.
CD-ROM Products
[Text Extracted from the MRCC Home Page]
The U.S. Daily Climate Normals (DSI-9641D) are based on monthly maximum, minimum, and mean temperature and monthly total precipitation records for each year in the 30-year period 1971-2000, inclusive (as well as separately computed monthly degree day totals). The monthly values are available in data set DSI-9641C or publication online (Climatography of the United States, No. 81 Monthly Station Normals of Temperature, Precipitation, and Heating and Cooling Degree Days, 1971-2000). In order to be included in the normals, a station had to have at least 10 years of monthly temperature data or 10 years of monthly precipitation data for each month in the period 1971-2000. In addition, a station had to be active since January 1, 1999, or had to be included as a normals station in the 1961-1990 normals.
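The inclusion rule stated above can be written out as a small check; this is my paraphrase of the stated criteria for illustration, not NCEI's selection code, and the function and argument names are hypothetical.

```python
# Minimal sketch of the stated inclusion rule: a station needs at least 10 years of
# monthly temperature data or at least 10 years of monthly precipitation data for every
# calendar month in 1971-2000, and must either have been active since January 1, 1999,
# or have been included as a normals station in the 1961-1990 normals.

def eligible_for_normals(temp_years_by_month, precip_years_by_month,
                         active_since_1999, in_1961_1990_normals):
    """temp_years_by_month / precip_years_by_month: dicts mapping month (1-12)
    to the count of years with data for that month during 1971-2000."""
    enough_temp = all(temp_years_by_month.get(m, 0) >= 10 for m in range(1, 13))
    enough_precip = all(precip_years_by_month.get(m, 0) >= 10 for m in range(1, 13))
    return (enough_temp or enough_precip) and (active_since_1999 or in_1961_1990_normals)

# Example: complete precipitation record, station active through 1999 -> eligible.
print(eligible_for_normals({}, {m: 30 for m in range(1, 13)}, True, False))  # True
```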
Licence to use Copernicus products: https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/licence-to-use-copernicus-products/licence-to-use-copernicus-products_b4b9451f54cffa16ecef5c912c9cebd6979925a956e3fa677976e0cf198c2c18.pdf
ERA5 is the fifth generation ECMWF reanalysis for the global climate and weather for the past 8 decades. Data are available from 1940 onwards. ERA5 replaces the ERA-Interim reanalysis.

Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. This principle, called data assimilation, is based on the method used by numerical weather prediction centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way, but at reduced resolution to allow for the provision of a dataset spanning back several decades. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations and, when going further back in time, to allow for the ingestion of improved versions of the original observations, all of which benefit the quality of the reanalysis product.

ERA5 provides hourly estimates for a large number of atmospheric, ocean-wave and land-surface quantities. An uncertainty estimate is sampled by an underlying 10-member ensemble at three-hourly intervals. Ensemble mean and spread have been pre-computed for convenience. Such uncertainty estimates are closely related to the information content of the available observing system, which has evolved considerably over time. They also indicate flow-dependent sensitive areas. To facilitate many climate applications, monthly-mean averages have been pre-calculated too, though monthly means are not available for the ensemble mean and spread. ERA5 is updated daily with a latency of about 5 days. If serious flaws are detected in this early release (called ERA5T), the data could differ from the final release issued 2 to 3 months later; users are notified if this occurs.

The data set presented here is a regridded subset of the full ERA5 data set on native resolution. It is online on spinning disk, which should ensure fast and easy access. It should satisfy the requirements for most common applications. An overview of all ERA5 datasets can be found in this article. Information on access to ERA5 data on native resolution is provided in these guidelines. Data have been regridded to a regular lat-lon grid of 0.25 degrees for the reanalysis and 0.5 degrees for the uncertainty estimate (0.5 and 1 degree respectively for ocean waves). There are four main subsets: hourly and monthly products, both on pressure levels (upper air fields) and single levels (atmospheric, ocean-wave and land surface quantities). The present entry is "ERA5 hourly data on single levels from 1940 to present".
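For readers who want to pull a small sample of this dataset programmatically, the sketch below uses the Climate Data Store API client (cdsapi) to request one variable for a single day. It requires a CDS account and configured credentials, and the exact request keys accepted by the current service may differ from those shown, so treat it as an illustration rather than a verified request.

```python
# Illustrative sketch of retrieving a small slice of "ERA5 hourly data on single levels"
# via the Climate Data Store API. Requires a CDS account and a configured ~/.cdsapirc;
# the request dictionary below is an assumption about the accepted keys and values.
import cdsapi

client = cdsapi.Client()
client.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": "2m_temperature",
        "year": "2020",
        "month": "01",
        "day": "01",
        "time": [f"{h:02d}:00" for h in range(24)],  # all 24 hourly steps
        "format": "netcdf",
    },
    "era5_2m_temperature_20200101.nc",
)
```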
The NOAA Monthly U.S. Climate Gridded Dataset (NClimGrid) consists of four climate variables derived from the GHCN-D dataset: maximum temperature, minimum temperature, average temperature, and precipitation. Each file provides monthly values on a nominal 5 km by 5 km latitude/longitude grid for the Continental United States. Data are available from 1895 to the present. On an annual basis, approximately one year of "final" nClimGrid will be submitted to replace the initially supplied "preliminary" data for the same time period. Users should be sure to ascertain which level of data is required for their research.
EpiNOAA is an analysis-ready dataset that consists of a daily time-series of nClimGrid measures (maximum temperature, minimum temperature, average temperature, and precipitation) at the county scale. Each file provides daily values for the Continental United States. Data are available from 1951 to the present. Daily data are updated every 3 days with a preliminary data file and replaced with the scaled (i.e., quality controlled) data file every three months. This derivative data product is an enhancement from the original daily nClimGrid dataset in that all four weather parameters are now packaged into one file and assembled in a daily time-series format. In addition to a direct download option, an R package and web interface have been developed to streamline access to the final data product. These options allow end users three separate access modes to arrive at a customized dataset unique to each end user's application. Users should be sure to review the data documentation to inform which level of data is required for their research.
The data we used for this study include species occurrence data (n=15 species), climate data and predictions, an expert opinion questionnaire, and species masks that represented the model domain for each species. For this data release, we include the results of the expert opinion questionnaire and the species model domains (or masks). We developed an expert opinion questionnaire to gather expert opinion regarding the importance of climate variables in determining a species' geographic range. The species masks, or model domains, were defined separately for each species using a variation of the "target-group" approach (Phillips et al. 2009), where the domain was determined using convex polygons including occurrence data for at least three phylogenetically related and similar species (Watling et al. 2012). The species occurrence data, climate data, and climate predictions are freely available online, and therefore are not included in this data release. The species occurrence data were obtained primarily from the online database Global Biodiversity Information Facility (GBIF; http://www.gbif.org/), and from scientific literature (Watling et al. 2011). Climate data were obtained from the WorldClim database (Hijmans et al. 2005) and climate predictions were obtained from the Center for Ocean-Atmospheric Prediction Studies (COAPS) at Florida State University (https://floridaclimateinstitute.org/resources/data-sets/regional-downscaling). See metadata for references.
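As a rough illustration of the convex-polygon ("target-group") domain construction described above, the sketch below pools synthetic occurrence points for three related species and takes their convex hull with scipy; the data and workflow details are assumptions for illustration, not the authors' actual procedure.

```python
# Illustrative sketch (not the authors' code): build a convex-polygon model domain
# from occurrence points pooled across three related species.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)
# Synthetic lon/lat occurrences for three related species, pooled together.
occurrences = np.vstack([rng.normal(loc=(-82.0 + i, 28.0 + i), scale=1.5, size=(40, 2))
                         for i in range(3)])

hull = ConvexHull(occurrences)                 # convex polygon enclosing all pooled occurrences
domain_vertices = occurrences[hull.vertices]   # polygon vertices defining the model domain (mask)
print(domain_vertices.shape)
```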
This data set is a subset of "Global Monthly Climatology for the 20th Century (New et al.)" (2000a). This subset characterizes mean monthly surface climate over the study area of the Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) in South America (i.e., latitude 10 N to 25 S, longitude 30 to 85 W) during nearly all of the 20th Century. The data are gridded at 0.5-degree latitude/longitude resolution and include seven variables: precipitation, mean temperature, diurnal temperature range, wet-day frequency, vapour pressure, cloud cover, and ground-frost frequency. All variables have mean monthly values for the period 1901-1995, several have data as recent as 1998, and further data will be added by the data originators. In constructing the monthly grids, the authors used an anomaly approach that attempts to maximize station data in space and time (New et al. 2000b). In this technique, grids of monthly historic anomalies are derived in relation to a standard normal period. Station measurement data for the years 1961-1990 were extracted from the monthly data holdings of the Climatic Research Unit and the Global Historic Climatology Network (GHCN) and used in constructing the normal period (New et al. 1999). The anomaly grids were then combined with high-resolution mean monthly climatology to arrive at fields of estimated historical monthly surface climate. Data are in ASCII GRID format for ArcInfo. Information on creating this LBA subset is available in ftp://daac.ornl.gov/data/lba/physical_climate/GIS_EastAngliaClimateMonthly/comp/eastanglia_readme.pdf. Data users are encouraged to see the companion file New et al. (2000) for a complete description of this technique and potential applications and limitations of the data set. For additional information, refer to the IPCC Data Distribution Centre. To access the complete year-by-year monthly data set or data more recent than posted here, users may make a request with the Climate Impacts LINK Project at the Climatic Research Unit (e-mail: d.viner@uea.ac.uk; web site: www.cru.uea.ac.uk/link). LBA was a cooperative international research initiative led by Brazil. NASA was a lead sponsor for several experiments. LBA was designed to create the new knowledge needed to understand the climatological, ecological, biogeochemical, and hydrological functioning of Amazonia; the impact of land use change on these functions; and the interactions between Amazonia and the Earth system. More information about LBA can be found at http://www.daac.ornl.gov/LBA/misc_amazon.html.
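To make the anomaly approach concrete, the sketch below shows the basic arithmetic under simplified assumptions (synthetic arrays stand in for the real CRU grids): a monthly anomaly relative to the 1961-1990 normal period is added to a high-resolution mean monthly climatology to estimate the historical monthly field.

```python
# Simplified illustration of the anomaly approach (not the CRU production code).
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for a 0.5-degree grid over the LBA study area (10 N to 25 S, 30 to 85 W).
nlat, nlon = 70, 110
climatology_1961_1990 = rng.uniform(20.0, 30.0, size=(12, nlat, nlon))  # mean monthly normals
anomaly_1955 = rng.normal(0.0, 0.5, size=(12, nlat, nlon))              # monthly anomalies vs. the normals

# Estimated historical monthly surface climate = normal-period climatology + anomaly.
estimated_1955 = climatology_1961_1990 + anomaly_1955
print(estimated_1955.shape)  # (12 months, nlat, nlon)
```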
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
The CRU CY version 3.24 dataset consists of country averages at a monthly, seasonal and annual frequency, for ten climate variables, including cloud cover, diurnal temperature range, frost day frequency, precipitation, daily mean temperature, monthly average daily maximum and minimum temperature, vapour pressure and Potential Evapo-transpiration.
This dataset was produced in 2016 by the Climatic Research Unit (CRU) at the University of East Anglia. The data are available as text files with the extension '.per' and can be opened by most text editors.
Spatial averages are calculated using area-weighted means. CRU CY3.24 is derived directly from the CRU TS3.24 dataset. CRU CY version 3.24 spans the period 1901-2015 for 289 countries.
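As a simple illustration of an area-weighted spatial average on a latitude-longitude grid: the exact weighting used for CRU CY is described in its documentation; cosine-of-latitude weights are a common choice and are assumed here, with entirely synthetic values.

```python
# Sketch of an area-weighted mean over a country's grid cells, assuming
# cosine-of-latitude weights as a proxy for grid-cell area on a regular lat/lon grid.
import numpy as np

rng = np.random.default_rng(2)
lats = np.arange(-55.75, 12.25, 0.5)                 # hypothetical 0.5-degree cell-centre latitudes
values = rng.uniform(0.0, 300.0, size=lats.size)     # one value per grid cell (synthetic)
in_country = rng.random(lats.size) > 0.3             # mask of cells belonging to the country (synthetic)

weights = np.cos(np.deg2rad(lats)) * in_country      # zero weight outside the country
area_weighted_mean = np.sum(values * weights) / np.sum(weights)
print(round(area_weighted_mean, 1))
```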
To understand the CRU CY3.24 dataset, it is important to understand the construction and limitations of the underlying dataset, CRU TS3.24. It is therefore recommended that all users read the Harris et al. (2014) paper listed in the online documentation for this record.
CRU CY data are available for download to all CEDA users.
Climate change is increasing the complexity, intensity, and frequency of disasters. Understanding future climate conditions in cities and towns across the United States is necessary to prepare for future climate realities. To address this requirement, ClimRR, the Climate Risk and Resilience Portal, empowers individuals, governments, and organizations to examine simulated future climate conditions at mid- and end-of-century for a range of climate perils. ClimRR was developed by the Center for Climate Resilience and Decision Science (CCRDS) at Argonne National Laboratory in collaboration with AT&T and the United States Department of Homeland Security's Federal Emergency Management Agency (FEMA).

Example: Climate adaptation planning starts with understanding the types of climate-related hazards and risks a community will likely face in the future. ClimRR helps analysts and planners gain an understanding of local-scale future climate conditions and extremes for wind, precipitation, and temperature for most of the United States.

PRECIPITATION
Each climate model estimates an amount of precipitation (whether rain, snow, sleet, or ice) that occurs every 3 hours across the entire modeled time period (i.e., every 3 hours, of every day, for all modeled years). These 3-hour precipitation estimates can be used to calculate the total precipitation over a designated period of time, ranging from daily to annually. Argonne calculated total annual precipitation by adding all 3-hour precipitation estimates for a given year (e.g., 2045) within a given time period/scenario (e.g., mid-century RCP4.5) and for a given climate model (e.g., CCSM), which produced the total annual precipitation for that scenario's model year, such as CCSM's estimate of annual precipitation in 2045 under climate scenario RCP4.5. This process was repeated for each year within a given time period/scenario (e.g., 2046, 2047, and so forth) and across all three climate models (CCSM, GFDL, and HadGEM), producing a total of 30 estimates of total annual precipitation for a given time period/scenario. The average of these values was taken to produce the ensemble mean of the total annual precipitation (in inches) for each time period/scenario.

CONSECUTIVE DAYS WITH NO PRECIPITATION
Each climate model estimates an amount of precipitation (whether rain, snow, sleet, or ice) that occurs every 3 hours across the entire modeled time period (i.e., every 3 hours, of every day, for all modeled years). These 3-hour precipitation estimates were used to calculate daily precipitation quantities by adding all 8 precipitation readings for each day of a given year (e.g., 2045) within a given time period/scenario (e.g., mid-century RCP4.5) and for a given climate model (e.g., CCSM). This process produced the total daily precipitation for every day in a scenario's model year, such as CCSM's daily estimates of total precipitation for the year 2045 under climate scenario RCP4.5. Using this information, Argonne identified the greatest number of consecutive days in which no precipitation occurred (i.e., the total daily precipitation quantity equaled zero) for that scenario's model year (e.g., for the year 2045 under scenario RCP4.5, the highest number of consecutive days without any precipitation was X). This process was repeated for each year within a given time period/scenario (e.g., 2046, 2047, and so forth) and across all three climate models (CCSM, GFDL, and HadGEM), producing 10 yearly values for each model, with each value representing the longest consecutive span with no precipitation for that year. Of the 10 yearly values for each climate model, the maximum value was selected (i.e., the decadal maximum). This resulted in 3 values for the longest consecutive number of days without precipitation for each time period/scenario, one for each climate model's 10 years of data. The average of these three maxima was then taken to produce the ensemble mean of the decade's highest number of consecutive days without precipitation in a single year.

CLIMATE SCENARIOS
Climate scenarios are the set of conditions used as inputs to climate models to represent estimates of future greenhouse gas (GHG) concentrations in the atmosphere. Climate models then evaluate how these GHG concentrations affect future (projected) climate. The data layers presented in this portal include results from two selected future climate scenarios for two 10-year periods, and a historical 10-year period for comparison:
- RCP4.5: Representative Concentration Pathway 4.5, with results provided for a mid-century period (2045 to 2054) and end-of-century period (2085 to 2094). In this scenario, human GHG emissions peak around 2040, then decline.
- RCP8.5: Representative Concentration Pathway 8.5, with results provided for a mid-century period (2045 to 2054) and end-of-century period (2085 to 2094). In this scenario, human GHG emissions continue to rise throughout the 21st century.
- Historical: Climate model is based on historical conditions, with results for 1995 to 2004.

DOWNSCALED CLIMATE MODELS
A global climate model is a complex mathematical representation of the major climate system components (atmosphere, land surface, ocean, and sea ice) and their interactions. These models project climatic conditions at frequent intervals over long periods of time (e.g., every 3 hours for the next 50-100 years), often with the purpose of evaluating how one or more GHG scenarios (such as RCP4.5 or RCP8.5) will impact future climate. Most global climate models project patterns at relatively coarse spatial resolutions, using grid cells ranging from 100 km2 to 200 km2. The climate data presented in this portal have been downscaled to a higher spatial resolution (12 km2) in order to fill a growing need for risk analysis and resilience planning at the local level. The process used to downscale global climate model data in this online portal is called dynamical downscaling. This method applies the pre-existing outputs of a global climate model as inputs to a separate, high-resolution regional climate model throughout its simulation. Dynamical downscaling accounts for the physical processes and natural features of a region, as well as the complex interaction between these elements and global dynamics under a climate scenario. Argonne's dynamical downscaling employs the Weather Research and Forecasting (WRF) model, a regional weather model for North America developed by the National Center for Atmospheric Research. Argonne then conducted three separate regional modeling runs, applying input data from a different global climate model for each simulation. These global climate models are:
- CCSM: The Community Climate System Model (Version 4) is a coupled global climate model developed by the University Corporation for Atmospheric Research with funding from the National Science Foundation, the Department of Energy, and the National Aeronautics and Space Administration. It comprises atmospheric, land surface, ocean, and sea ice submodels that run simultaneously with a central coupler component.
- GFDL: The Geophysical Fluid Dynamics Laboratory at the National Oceanic and Atmospheric Administration developed the Earth System Model Version 2G (note: the general convention, which we use, is to use the Laboratory's abbreviation to identify this model). It includes an atmospheric circulation model and an oceanic circulation model, and takes into account land, sea ice, and iceberg dynamics.
- HadGEM: The United Kingdom's Met Office developed the Hadley Global Environment Model 2-Earth System. It is used for both operational weather forecasting and climate research, and includes coupled atmosphere-ocean analysis and an earth system component that includes dynamic vegetation, ocean biology, and atmospheric chemistry.
Regional modeling with the global climate model outputs (i.e., dynamical downscaling) began by conducting a validation study, in which the WRF model is run using inputs from the global climate models over a historical period (in this case, 1995-2004). This 'backcasting' allows for an assessment of the WRF model's ability to reproduce observed local climate trends. Once validated, Argonne then supplied each individual global climate model's outputs (CCSM, GFDL, and HadGEM) for each climate scenario (mid-century RCP4.5, mid-century RCP8.5, end-of-century RCP4.5, and end-of-century RCP8.5) to the WRF regional model, producing three different downscaled projections of future climate conditions for each scenario, along with downscaled historical data for each global climate model.

ENSEMBLE MEANS
All data layers represent a variable along with its associated time period and climate scenario (e.g., mid-century RCP4.5). Each time period comprises one decade's worth of information: the historical (1995-2004), the mid-century (2045-2054), or the end-of-century (2085-2094). For each time period/climate scenario, the WRF model is run with each of the three global climate model outputs, producing three individual decades of weather data for each time period. In other words, Argonne's climate modeling produces 30 years of climate data for each decadal time period/climate scenario. By using the outputs from three different global climate models, rather than a single model, Argonne's climate projections better account for the internal uncertainty associated with any single model. Each year's worth of data includes weather outputs for every 3 hours, or 8 modeled outputs per day. While this allows for a high degree of granularity in assessing future climate trends, it can also lead to a number of different ways to analyze this data; however, there are several important base methodologies shared across all variables presented in this portal. Most variables are presented as annual or seasonal averages of daily observations; however, each annual/seasonal average
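The sketch below illustrates, with synthetic numbers, the consecutive-dry-days computation described above: 3-hourly precipitation is summed to daily totals, the longest run of zero-precipitation days is found for each model year, the decadal maximum is taken per model, and the three model maxima are averaged into an ensemble mean. The array shapes and values are made up for illustration and are not ClimRR data.

```python
# Illustration of the ClimRR-style consecutive-dry-days calculation on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_models, n_years, n_days, per_day = 3, 10, 365, 8   # 3 GCMs, one decade, 8 3-hourly steps per day

# Synthetic 3-hourly precipitation (mostly dry steps), shape: (model, year, day, 3-hour step)
precip_3hr = rng.exponential(0.02, size=(n_models, n_years, n_days, per_day))
precip_3hr[rng.random(precip_3hr.shape) < 0.8] = 0.0

daily_total = precip_3hr.sum(axis=-1)                # total daily precipitation

def longest_dry_run(daily):
    """Longest run of consecutive days with zero total precipitation."""
    longest = current = 0
    for value in daily:
        current = current + 1 if value == 0.0 else 0
        longest = max(longest, current)
    return longest

# Longest dry spell per model and year, then the decadal maximum per model.
yearly_max = np.array([[longest_dry_run(daily_total[m, y]) for y in range(n_years)]
                       for m in range(n_models)])
decadal_max_per_model = yearly_max.max(axis=1)       # one value per climate model
ensemble_mean = decadal_max_per_model.mean()         # average of the three decadal maxima
print(decadal_max_per_model, round(float(ensemble_mean), 1))
```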
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The stochastic climate data include 10,000 replicates of 130-yr daily data sets of rainfall and potential evapotranspiration generated using observed data sets without and with combined climate data. This work has been undertaken by researchers at the University of Newcastle and used in modelling for Greater Sydney Water Strategy.

This particular Asset (069001-070000) houses Silo Station IDs:
- 069006 - BETTOWYND (CONDRY)
- 069010 - BRAIDWOOD (WALLACE STREET)
- 069016 - MILTON (SARAH CLAYDON VILLAGE)
- 069041 - CHARLEYONG
- 069049 - NERRIGA COMPOSITE
- 069062 - SNOWBALL
- 069132 - BRAIDWOOD RACECOURSE

Note: If you would like to ask a question, make any suggestions, or tell us how you are using this dataset, please visit the NSW Water Hub, which has an online forum you can join.