Hydrological and meteorological information can help characterize environmental conditions and the risk factors facing the environment and its inhabitants. Because observation networks are sparse, gridded datasets use the available observations and known process relations to provide modeled information for areas where direct data collection is infeasible. Even when these datasets are available, users face barriers to use: how to access, acquire, and analyze data for small watershed areas when the datasets were produced for large, continental-scale processes. In this tutorial, we introduce the Observatory for Gridded Hydrometeorology (OGH) to resolve such hurdles in a use-case that processes NetCDF gridded datasets, interprets the findings, and applies a secondary modeling framework (Landlab).
LEARNING OBJECTIVES - Become familiar with data management, metadata management, and analyses with gridded data - Inspect and problem-solve with Python libraries - Explore data architecture and processes - Learn about the OGH Python library - Discuss conceptual data engineering and science operations
Use-case operations: 1. Prepare the computing environment 2. Get a list of grid cells 3. Retrieve NetCDF files and clip them to a spatial extent 4. Extract NetCDF metadata and convert NetCDF files to 1D ASCII time-series files 5. Visualize average monthly total precipitation 6. Apply summary values as modeling inputs 7. Visualize modeling outputs 8. Save results in a new HydroShare resource
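Step 3 of the workflow, clipping a gridded dataset to a spatial extent, boils down to selecting the grid cells whose coordinates fall inside a bounding box. The sketch below illustrates that logic with NumPy on synthetic coordinates; the grid spacing, bounding box, and variable names are illustrative and are not OGH's actual API.

```python
# Illustrative sketch of clipping a regular lat/lon grid to a watershed's
# bounding box by masking coordinate arrays. The 1/16-degree spacing and the
# bounding box values are made up for the example.
import numpy as np

lats = np.arange(45.0, 49.0, 0.0625)              # hypothetical grid latitudes
lons = np.arange(-124.0, -120.0, 0.0625)          # hypothetical grid longitudes
precip = np.random.rand(lats.size, lons.size)     # stand-in for a NetCDF variable

# Hypothetical watershed bounding box.
lat_min, lat_max = 46.0, 47.0
lon_min, lon_max = -122.5, -121.5

lat_mask = (lats >= lat_min) & (lats <= lat_max)
lon_mask = (lons >= lon_min) & (lons <= lon_max)
clipped = precip[np.ix_(lat_mask, lon_mask)]      # rows x cols inside the box

print(clipped.shape)
```

The tutorial's NetCDF workflow applies the same kind of selection to the real coordinate variables before converting the clipped cells to ASCII time series.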
For inquiries, issues, or to contribute to development, please refer to https://github.com/freshwater-initiative/Observatory
The netCDF (network Common Data Form) file format, developed and maintained by the Unidata program at the University Corporation for Atmospheric Research, is increasingly used to store and manage multidimensional scientific data. Although netCDF files offer advanced features and functionality in their own right, workflows that involve them can be intimidating for new users because of their binary format. There are several ways to manage netCDF data, including libraries in programming languages such as Fortran or Python; however, these methods require knowledge of the programming language as a prerequisite. User-interface applications such as Panoply, NetCDF Explorer, or ArcGIS can access, view, and in some cases modify or create netCDF files. Another tool for managing netCDF files is the netCDF Operators (NCO), a set of command-line tools built specifically for netCDF data. The NCO tools are highly optimized and flexible, allowing a myriad of netCDF workflows. This HTML-based tutorial aims to demystify the basic functionality and syntax of NCO commands that are useful for analyzing netCDF scientific data. The tutorial contains multiple examples focused on the analysis of scientific data (e.g., climatic measurements or model output), including code snippets, explanations, and figures. Specifically, Part 1 covers basic concatenation and averaging of single and ensemble record variables using the ncrcat, ncecat, ncra, and ncea commands, respectively. Part 2 builds on Part 1 and focuses on basic and advanced uses of the weighted-averaging command ncwa. The tutorial also includes examples of other common NCO commands, brief descriptions of how to download and install the package, and tools for netCDF visualization.
Although the tutorial does not explicitly cover every NCO command or every option, it is a good starting point, since many other NCO commands follow similar syntax and conventions.
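As a taste of the syntax covered in Part 1, the record-averaging command ncra takes one or more input files followed by a single output file. The sketch below shows the shell form in a comment and a hedged Python wrapper; the file names are placeholders, and the command is only executed if NCO is actually installed.

```python
# NCO commands are normally run from the shell, e.g. a record (time) average:
#
#   ncra jan.nc feb.nc mar.nc seasonal_mean.nc
#
# From Python the same call can be issued via subprocess. File names here are
# hypothetical; ncra averages the record dimension across the input files.
import shutil
import subprocess

inputs = ["jan.nc", "feb.nc", "mar.nc"]        # placeholder monthly files
cmd = ["ncra", *inputs, "seasonal_mean.nc"]    # last argument is the output

if shutil.which("ncra"):                       # run only if NCO is on PATH
    subprocess.run(cmd, check=True)
```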
The Global Forecast System (GFS) is a weather forecast model produced by the National Centers for Environmental Prediction (NCEP). Dozens of atmospheric and land-soil variables are available through this dataset, from temperatures, winds, and precipitation to soil moisture and atmospheric ozone concentration. The GFS data files stored here can be immediately used for OAR/ARL's NOAA-EPA Atmosphere-Chemistry Coupler Cloud (NACC-Cloud) tool, and are in Network Common Data Form (netCDF), a very common format across the scientific community. These particular GFS files contain a comprehensive set of global atmosphere/land variables at relatively high spatiotemporal resolution (approximately 13x13 km horizontal resolution, 127 vertical levels, and hourly output). They are not only necessary for the NACC-Cloud tool to adequately drive community air quality applications (e.g., the U.S. EPA's Community Multiscale Air Quality model; https://www.epa.gov/cmaq), but are also useful for a myriad of other applications across the Earth system modeling communities (e.g., atmosphere, hydrosphere, pedosphere). While many other data file and record formats are available for Earth system and climate research (e.g., GRIB, HDF, GeoTIFF), the netCDF files here are advantageous to the larger community because of the comprehensive, high-spatiotemporal-resolution information they contain, and because they are more scalable, appendable, shareable, self-describing, and community-friendly (i.e., many tools are available to the community of users). Of the four operational GFS forecast cycles per day (at 00Z, 06Z, 12Z, and 18Z), this particular netCDF dataset is updated daily (/inputs/yyyymmdd/) for the 12Z cycle and includes 24-hr output for both 2D (gfs.t12z.sfcf$0hh.nc) and 3D variables (gfs.t12z.atmf$0hh.nc).
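Based on the naming pattern described above, one day's 12Z cycle can be expanded into the expected 2D and 3D file names programmatically. The date below is a placeholder, the hours 0-24 follow the "24-hr output" description, and reading "$0hh" as a three-digit zero-padded forecast hour is an assumption about the pattern.

```python
# Expand one day's 12Z GFS cycle into expected file paths, following the
# /inputs/yyyymmdd/gfs.t12z.{sfcf,atmf}$0hh.nc pattern described above.
# The date is a placeholder; "$0hh" is assumed to be the zero-padded
# forecast hour (f000 ... f024).
date = "20240115"  # hypothetical retrieval date

surface_files = [f"/inputs/{date}/gfs.t12z.sfcf{h:03d}.nc" for h in range(25)]
atmos_files = [f"/inputs/{date}/gfs.t12z.atmf{h:03d}.nc" for h in range(25)]

print(surface_files[0])   # first 2D file of the cycle
print(atmos_files[-1])    # last 3D file of the cycle
```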
Also available are netCDF-formatted Global Land Surface Datasets (GLSDs) developed by Hung et al. (2024). The GLSDs are based on numerous satellite products and have been gridded to match the GFS spatial resolution (~13x13 km). These GLSDs contain vegetation canopy data (e.g., land surface type, vegetation clumping index, leaf area index, vegetation canopy height, and green vegetation fraction) that are supplemental to, and can be combined with, the GFS meteorological netCDF data for various applications, including NOAA-ARL's canopy-app. The canopy data variables are climatological, based on satellite data from the year 2020, combined with GFS meteorology for the year 2022, and are created at a daily temporal resolution (/inputs/geo-files/gfs.canopy.t12z.2022mmdd.sfcf000.global.nc).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The NetCDF4 files in this repository were converted from their original source formats using GMTED2010-netcdf scripts.
Lemoine, F. G., S. C. Kenyon, J. K. Factor, R.G. Trimmer, N. K. Pavlis, D. S. Chinn, C. M. Cox, S. M. Klosko, S. B. Luthcke, M. H. Torrence, Y. M. Wang, R. G. Williamson, E. C. Pavlis, R. H. Rapp and T. R. Olson (1998). The Development of the Joint NASA GSFC and the National Imagery and Mapping Agency (NIMA) Geopotential Model EGM96. NASA/TP-1998-206861, July 1998. https://ntrs.nasa.gov/citations/19980218814
Pavlis, N. K., Holmes, S. A., Kenyon, S. C., & Factor, J. K. (2012). The development and evaluation of the Earth Gravitational Model 2008 (EGM2008). Journal of Geophysical Research: Solid Earth, 117(B4), 2011JB008916. https://doi.org/10.1029/2011JB008916
Danielson, J. J. and D. B. Gesch (2011). Global multi-resolution terrain elevation data 2010 (GMTED2010). U.S. Geologic Survey, Open-File Report 2011-1073, https://doi.org/10.3133/ofr20111073
The Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) Version 3 (ASTGTM) provides a global digital elevation model (DEM) of land areas on Earth at a spatial resolution of 1 arc second (approximately 30-meter horizontal posting at the equator). The development of the ASTER GDEM data products is a collaborative effort between the National Aeronautics and Space Administration (NASA) and Japan's Ministry of Economy, Trade, and Industry (METI). The ASTER GDEM data products are created by the Sensor Information Laboratory Corporation (SILC) in Tokyo. The ASTER GDEM Version 3 data product was created from the automated processing of the entire ASTER Level 1A archive of scenes acquired between March 1, 2000, and November 30, 2013. Stereo correlation was used to produce over one million individual scene-based ASTER DEMs, to which cloud masking was applied. All cloud-screened and non-cloud-screened DEMs were stacked. Residual bad values and outliers were removed. In areas with limited data stacking, several existing reference DEMs were used to supplement ASTER data to correct for residual anomalies. Selected data were averaged to create final pixel values before partitioning the data into 1 degree latitude by 1 degree longitude tiles with a one-pixel overlap. To correct elevation values of water body surfaces, the ASTER Global Water Bodies Database (ASTWBD) Version 1 data product was also generated. The geographic coverage of the ASTER GDEM extends from 83° North to 83° South. Each tile is distributed in both Cloud Optimized GeoTIFF (COG) and NetCDF4 format through NASA Earthdata Search and in standard GeoTIFF format through the LP DAAC Data Pool. Data are projected on the 1984 World Geodetic System (WGS84)/1996 Earth Gravitational Model (EGM96) geoid. Each of the 22,912 tiles in the collection contains at least 0.01% land area. Provided in the ASTER GDEM product are layers for DEM and number of scenes (NUM).
The NUM layer indicates the number of scenes that were processed for each pixel and the source of the data. While the ASTER GDEM Version 3 data products offer substantial improvements over Version 2, users are advised that the products may still contain anomalies and artifacts that reduce their usability for certain applications. Known issues: ASTER GDEM Version 3 tiles overlap by one pixel to the north, south, east, and west of the tile perimeter; in most cases the overlapping edge pixels have identical values, but in some instances the values may differ. ASTER GDEM Version 3 is considered void free except for Greenland and Antarctica. Because there are known inaccuracies and artifacts in the dataset, users are reminded to use the product with awareness of these limitations. The data are provided "as is" and neither NASA nor METI/Earth Resources Satellite Data Analysis Center (ERSDAC) will be responsible for any damages resulting from use of the data. Improvements/changes from the previous version: expansion of acquisition coverage, increasing the number of cloud-free input scenes from about 1.5 million in Version 2 to about 1.88 million in Version 3; separation of rivers from lakes in the water body processing; minimum water body detection size decreased from 1 square kilometer (km²) to 0.2 km².
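Because the GDEM is partitioned into 1 degree by 1 degree tiles, locating the tile that covers a point reduces to flooring the coordinates. The helper below labels tiles by their southwest corner using the common N/S-E/W granule convention (e.g., N47E010); the exact naming scheme is an assumption and should be verified against the product documentation.

```python
import math

def gdem_tile(lat: float, lon: float) -> str:
    """Return the 1-degree tile label containing (lat, lon).

    Tiles are labeled by their southwest corner, following the common
    convention for GDEM granules (assumed here, e.g. N47E010).
    """
    lat0 = math.floor(lat)   # southwest-corner latitude
    lon0 = math.floor(lon)   # southwest-corner longitude
    ns = "N" if lat0 >= 0 else "S"
    ew = "E" if lon0 >= 0 else "W"
    return f"{ns}{abs(lat0):02d}{ew}{abs(lon0):03d}"

print(gdem_tile(47.37, 10.98))   # a point in the Alps
print(gdem_tile(-33.9, -70.7))   # a point near Santiago, Chile
```

Note that the one-pixel overlap described above means points exactly on a tile edge also appear in the neighboring tile.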
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Python 2.7 code converts an Argo netCDF profile file into a WMO BUFR 3-15-003 file. It handles core temperature and salinity profiles only. The Binary Universal Form for the Representation of meteorological data (BUFR) is a binary data format maintained by the World Meteorological Organization (WMO). The main script is argonetcdftobufr.py. It calls other Python scripts (in the attached zip folder) as well as some standard Python packages. You call it from the Linux command line like this: python2.7 argonetcdftobufr.py -i "path/to/netcdf_file/input_netcdf_filename.nc" -o "path/to/output_destination/output_binary_filename.dat"
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
One of the clearest manifestations of ongoing global climate change is the dramatic retreat and thinning of the Arctic sea-ice cover. While all state-of-the-art climate models consistently reproduce the sign of these changes, they largely disagree on their magnitude, the reasons for which remain contentious. As such, consensual methods to reduce uncertainty in projections are lacking. Here, using the CMIP5 ensemble, we propose a process-oriented approach to revisit this issue. We show that inter-model differences in sea-ice loss and, more generally, in simulated sea-ice variability, can be traced to differences in the simulation of seasonal growth and melt. The way these processes are simulated is relatively independent of the complexity of the sea-ice model used, but rather a strong function of the background thickness. The larger role played by thermodynamic processes as sea ice thins further suggests the recent and projected reductions in sea-ice thickness induce a transition of the Arctic towards a state with enhanced volume seasonality but reduced interannual volume variability and persistence, before summer ice-free conditions eventually occur. These results prompt modelling groups to focus their priorities on the reduction of sea-ice thickness biases.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The role of Pre- and Protohistoric anthropogenic land cover changes needs to be quantified i) to establish a baseline for comparison with current human impact on the environment and ii) to separate it from naturally occurring changes in our environment. Results are presented from the simple, adaptation-driven, spatially explicit Global Land Use and technological Evolution Simulator (GLUES) for pre-Bronze age demographic, technological and economic change. Using scaling parameters from the History Database of the Global Environment as well as GLUES-simulated population density and subsistence style, the land requirement for growing crops is estimated. The intrusion of cropland into potentially forested areas is translated into carbon loss due to deforestation with the dynamic global vegetation model VECODE. The land demand in important Prehistoric growth areas - converted from mostly forested areas - led to large-scale regional (country size) deforestation of up to 11% of the potential forest. In total, 29 Gt carbon were lost from global forests between 10 000 BC and 2000 BC and were replaced by crops; this value is consistent with other estimates of Prehistoric deforestation. The generation of realistic (agri-)cultural development trajectories at a regional resolution is a major strength of GLUES. Most of the pre-Bronze age deforestation is simulated in a broad farming belt from Central Europe via India to China. Regional carbon loss is, e.g., 5 Gt in Europe and the Mediterranean, 6 Gt on the Indian subcontinent, 18 Gt in East and Southeast Asia, or 2.3 Gt in sub-Saharan Africa.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The NetCDF data files and Python files used to make the figures and movies provided in the supplemental material of Finley et al. NetCDF software is available from https://www.unidata.ucar.edu/software/netcdf/. Figures 1-4 and the movies were made with VisIt, which is available from https://visit-dav.github.io/visit-website/.
We implemented automated workflows using Jupyter notebooks for each state. The GIS processing, crucial for merging, extracting, and projecting GeoTIFF data, was performed using ArcPy, a Python package for geographic data analysis, conversion, and management within ArcGIS (Toms, 2015). After generating state-scale LES (large extent spatial) datasets in GeoTIFF format, we used the xarray and rioxarray Python packages to convert GeoTIFF to NetCDF. Xarray is a Python package for working with multi-dimensional arrays, and rioxarray is the rasterio extension for xarray; rasterio is a Python library for reading and writing GeoTIFF and other raster formats. Xarray facilitated data manipulation and metadata addition in the NetCDF file, while rioxarray was used to save the GeoTIFF as NetCDF. These procedures resulted in three HydroShare resources (HS 3, HS 4, and HS 5) for sharing state-scale LES datasets. Notably, due to licensing constraints with ArcGIS Pro, a commercial GIS software package, the Jupyter notebook development was undertaken on Windows.
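The GeoTIFF-to-NetCDF step can be sketched minimally with xarray alone. In the actual workflow rioxarray.open_rasterio() would load the GeoTIFF; here a small synthetic array stands in for that raster so the example is self-contained, and the variable name, coordinates, and attributes are all illustrative.

```python
# Minimal sketch of metadata addition and NetCDF export with xarray,
# assuming xarray (plus a netCDF backend) is installed. A synthetic array
# replaces the rioxarray-loaded GeoTIFF so the example runs standalone.
import os
import tempfile

import numpy as np
import xarray as xr

# Stand-in for one band of a state-scale LES GeoTIFF.
da = xr.DataArray(
    np.random.rand(4, 5).astype("float32"),
    dims=("y", "x"),
    coords={"y": np.linspace(45.0, 44.7, 4), "x": np.linspace(-111.0, -110.6, 5)},
    name="les",
)
ds = da.to_dataset()

# Metadata addition, as described for the HydroShare LES resources
# (attribute names here are placeholders).
ds.attrs["title"] = "State-scale LES dataset (illustrative)"
ds["les"].attrs["long_name"] = "large extent spatial variable"

# Save as NetCDF; in the real workflow the array is GeoTIFF-backed.
path = os.path.join(tempfile.gettempdir(), "les_example.nc")
ds.to_netcdf(path)
```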
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Geoscientists now live in a world of exponential growth in digital data and methods. Climate change studies usually describe computational methods informally, yet as climate scientists seek to share their information, the case for reproducible research has received increasing attention in the geosciences. Keeping work in an open-source format makes it easier to exchange not only with fellow scientists but also with a variety of audiences, including funders, publishers, and journalists. R is a powerful, highly extensible open-source language that promotes reproducible science, and it is highly accessible to non-computational scientists: coupled with packages like 'raster', 'ncdf4', 'rgdal', and 'rasterVis', R enables scientists to make sense of their data and to carry out complex analyses. In this paper we assess the power of the R language for manipulating climate data from a huge dataset, the Coupled Model Intercomparison Project Phase 5 (CMIP5), and we propose an example of best practices for handling model ensembles. To our knowledge, this is the first study to promote best practices for the CMIP5 ensemble. Making NetCDF data accessible to R through the capabilities of the raster package provides efficient access to the multi-model archive, with crucial applications in climate change research. In recent years more than 100 peer-reviewed scientific publications have used the CMIP5 datasets. We envision that in the near future (5-10 years), scientists will use radically new tools to author papers and disseminate information about the process and products of their research.
PACE NetCDF images at 8-day intervals.
"PACE's data will help us better understand how the ocean and atmosphere exchange carbon dioxide. In addition, it will reveal how aerosols might fuel phytoplankton growth in the surface ocean. Novel uses of PACE data will benefit our economy and society. For example, it will help identify the extent and duration of harmful algal blooms. PACE will extend and expand NASA's long-term observations of our living planet. By doing so, it will take Earth's pulse in new ways for decades to come."
PACE NetCDF images dataset: - source: https://oceancolor.gsfc.nasa.gov/l3/order/ - start date: 2024-03-05 - end date: 2024-10-05 - sensor: PACE-OCI - product: Phytoplankton Carbon
All rights, and licenses go to the original data provider: NASA
This data was collected during NASA space apps challenge 2024
2013-Nov SuperDARN radar data in netCDF format. These files were produced using versions 2.5 and 3.0 of the public FitACF algorithm, using the AACGM v2 coordinate system. Cite this dataset if using our data in a publication. The RST is available here: https://github.com/SuperDARN/rst. The research enabled by SuperDARN is due to the efforts of teams of scientists and engineers working in many countries to build and operate radars, process data and provide access, develop and improve data products, and assist users in interpretation. Users of SuperDARN data and data products are asked to acknowledge this support in presentations and publications. A brief statement on how to acknowledge use of SuperDARN data is provided below. Users are also asked to consult with a SuperDARN PI prior to submission of work intended for publication. A listing of radars and PIs with contact information can be found here: (SuperDARN Radar Overview). Recommended form of acknowledgement for the use of SuperDARN data: 'The authors acknowledge the use of SuperDARN data. SuperDARN is a collection of radars funded by national scientific funding agencies of Australia, Canada, China, France, Italy, Japan, Norway, South Africa, United Kingdom and the United States of America.'
FLASH_SSF_NOAA20-FM6-VIIRS_Version1A data are near real-time CERES-observed TOA fluxes, clouds, and parameterized surface fluxes, not officially calibrated. The Fast Longwave And SHortwave Flux (FLASHFlux) data are a product line of the Clouds and the Earth's Radiant Energy System (CERES) project designed for processing and release of top-of-atmosphere (TOA) and surface radiative fluxes within one week of CERES instrument measurement. The FLASHFlux data product is a rapid-release product based on the algorithms developed for, and data collected by, the CERES project. CERES is currently producing world-class climate data products derived from measurements taken aboard NASA's Terra and Aqua spacecraft. While of exceptional fidelity, these data products require considerable processing time to assure quality, verify accuracy, and assess precision, so CERES data are typically released up to six months after the initial measurements are acquired. Such delays are of little consequence for climate studies, especially considering the improved quality of the released data products; however, FLASHFlux products are not intended to achieve climate quality. FLASHFlux data products were envisioned as a resource whereby CERES data could be provided to the community within a week of the initial measurements, with some calibration accuracy requirements relaxed to gain speed. The Single Scanner Footprint TOA/Surface Fluxes and Clouds (SSF) product contains one hour of instantaneous FLASHFlux data for a single CERES scanner instrument. The SSF combines instantaneous CERES data with scene information from a higher-resolution imager, such as the Visible Infrared Imaging Radiometer Suite (VIIRS) on the NOAA-20 satellite, and with meteorological and ozone information from the Goddard Earth Observing System GEOS-5 FP-IT Atmospheric Data Assimilation System (GEOS-5 ADAS).
Scene identification and cloud properties are defined at the higher imager resolution, and these data are averaged over the larger CERES footprint. For each CERES footprint, the SSF contains top-of-atmosphere fluxes in SW, LW, and incoming NET, surface fluxes computed with the Langley parameterized shortwave and longwave algorithms, and cloud information. CERES is a key Earth Observing System (EOS) program component. The CERES instruments provide radiometric measurements of the Earth's atmosphere from three broadband channels. The CERES mission is a follow-up to the successful Earth Radiation Budget Experiment (ERBE) mission. The first CERES instrument (PFM) was launched on November 27, 1997, as part of the Tropical Rainfall Measuring Mission (TRMM). Two CERES instruments (FM1 and FM2) were launched into polar orbit on board the EOS flagship Terra on December 18, 1999. Two additional CERES instruments (FM3 and FM4) were launched on board EOS Aqua on May 4, 2002. CERES instrument Flight Model 5 (FM5) was launched on board the Suomi National Polar-orbiting Partnership (NPP) satellite on October 28, 2011. The latest CERES instrument (FM6) was launched on board NOAA-20 on November 18, 2017.
The ASTER Global Digital Elevation Model (GDEM) Version 3 (ASTGTM) provides a global digital elevation model (DEM) of land areas on Earth at a spatial resolution of 1 arc second (approximately 30-meter horizontal posting at the equator). The development of the ASTER GDEM data products is a collaborative effort between the National Aeronautics and Space Administration (NASA) and Japan's Ministry of Economy, Trade, and Industry (METI). The ASTER GDEM data products are created by the Sensor Information Laboratory Corporation (SILC) in Tokyo. The ASTER GDEM Version 3 data product was created from the automated processing of the entire ASTER Level 1A (https://doi.org/10.5067/ASTER/AST_L1A.003) archive of scenes acquired between March 1, 2000, and November 30, 2013. Stereo correlation was used to produce over one million individual scene-based ASTER DEMs, to which cloud masking was applied. All cloud-screened and non-cloud-screened DEMs were stacked. Residual bad values and outliers were removed. In areas with limited data stacking, several existing reference DEMs were used to supplement ASTER data to correct for residual anomalies. Selected data were averaged to create final pixel values before partitioning the data into 1 degree latitude by 1 degree longitude tiles with a one-pixel overlap. To correct elevation values of water body surfaces, the ASTER Global Water Bodies Database (ASTWBD) (https://doi.org/10.5067/ASTER/ASTWBD.001) Version 1 data product was also generated. The geographic coverage of the ASTER GDEM extends from 83° North to 83° South. Each tile is distributed in NetCDF format and projected on the 1984 World Geodetic System (WGS84)/1996 Earth Gravitational Model (EGM96) geoid. Each of the 22,912 tiles in the collection contains at least 0.01% land area. Each ASTGTM_NC data product contains a DEM file, which provides elevation information.
The corresponding ASTGTM_NUMNC file indicates the number of scenes that were processed for each pixel and the source of the data. While the ASTER GDEM Version 3 data products offer substantial improvements over Version 2, users are advised that the products may still contain anomalies and artifacts that reduce their usability for certain applications. Improvements/changes from previous versions: expansion of acquisition coverage, increasing the number of cloud-free input scenes from about 1.5 million in Version 2 to about 1.88 million in Version 3; separation of rivers from lakes in the water body processing; minimum water body detection size decreased from 1 km² to 0.2 km².
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
African dust outbreaks are the result of complex interactions between the land, atmosphere, and oceans, and only recently has a large body of work begun to emerge that aims to understand the controls on, and impacts of, African dust. At the same time, long-term records of dust outbreaks are either inferred from visibility data from weather stations or confined to a few in situ observational sites. Satellites provide the best opportunity for studying the large-scale characteristics of dust storms, but reliable records of dust are generally on the scale of a decade or less. Here the authors develop a simple model for using modern and historical data from meteorological satellites, in conjunction with a proxy record for atmospheric dust, to extend satellite-retrieved dust optical depth over the northern tropical Atlantic Ocean from 1955 to 2008. The resultant 54-yr record of dust has a spatial resolution of 1° and a monthly temporal resolution. From analysis of the historical dust data, monthly tropical northern Atlantic dust cover is bimodal, has a strong annual cycle, peaked in the early 1980s, and shows minima in dustiness at the beginning and end of the record. These dust optical depth estimates are used to calculate radiative forcing and heating rates from the surface through the top of the atmosphere over the last half century. Radiative transfer simulations show a large net negative dust forcing from the surface through the top of the atmosphere, also with a distinct annual cycle, and mean tropical Atlantic monthly values of the surface forcing range from -3 to -9 W/m². Since the surface forcing is roughly a factor of 3 larger in magnitude than the top-of-the-atmosphere forcing, there is also a positive heating rate of the midtroposphere by dust.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Wallops SuperDARN radar data collected during the 2024-04-08 total solar eclipse. These files were produced using versions 2.5 and 3.0 of the public FitACF algorithm, using the AACGM v2 coordinate system. Cite this dataset if using our data in a publication.
The RST is available here: https://github.com/SuperDARN/rst
The research enabled by SuperDARN is due to the efforts of teams of scientists and engineers working in many countries to build and operate radars, process data and provide access, develop and improve data products, and assist users in interpretation. Users of SuperDARN data and data products are asked to acknowledge this support in presentations and publications. A brief statement on how to acknowledge use of SuperDARN data is provided below.
Users are also asked to consult with a SuperDARN PI prior to submission of work intended for publication. A listing of radars and PIs with contact information can be found here: (SuperDARN Radar Overview)
Recommended form of acknowledgement for the use of SuperDARN data:
'The authors acknowledge the use of SuperDARN data. SuperDARN is a collection of radars funded by national scientific funding agencies of Australia, Canada, China, France, Italy, Japan, Norway, South Africa, United Kingdom and the United States of America.'
This dataset contains SABL (Scanning Aerosol Backscatter Lidar) data obtained aboard the NCAR/NSF C-130 aircraft during the Gulf Of Tehuantepec Experiment (GOTEX), Coupled Development of Ocean Waves and Boundary Layers (aka OceanWaves) project. The data are in netCDF format.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
In this study we present first results of a new model development, ECHAM5-JSBACH-wiso, in which we have incorporated the stable water isotopes H₂¹⁸O and HDO as tracers in the hydrological cycle of the coupled atmosphere-land surface model ECHAM5-JSBACH. The ECHAM5-JSBACH-wiso model was run under present-day climate conditions at two different resolutions (T31L19, T63L31). A comparison between ECHAM5-JSBACH-wiso and ECHAM5-wiso shows that the coupling has a strong impact on the simulated temperature and soil wetness. Driven by these changes in temperature and the hydrological cycle, the δ18O in precipitation also varies, from -4 permil up to 4 permil. One of the strongest anomalies appears over northeast Asia where, due to an increase in temperature, the δ18O in precipitation increases as well. In order to analyze the sensitivity of the fractionation processes over land, we compare a set of simulations with various implementations of these processes over the land surface. The simulations allow us to distinguish between no fractionation, fractionation included in the evaporation flux (from bare soil), and fractionation included in both the evaporation and transpiration (water transport through plants) fluxes. While the isotopic composition of the soil water may change in δ18O by up to +8 permil, the simulated δ18O in precipitation shows only slight differences on the order of ±1 permil. The simulated isotopic composition of precipitation fits well with the available observations from the GNIP (Global Network of Isotopes in Precipitation) database.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
SAMI3 (Sami3 is Also a Model of the Ionosphere) is a seamless, three-dimensional, physics-based model of the ionosphere (Huba et al, 2008). It is based on SAMI2, a two-dimensional model of the ionosphere (Huba et al., 2000).
SAMI3 models the plasma and chemical evolution of seven ion species (H⁺, He⁺, N⁺, O⁺, N₂⁺, NO⁺ and O₂⁺). The temperature equation is solved for three ion species (H⁺, He⁺ and O⁺) and for the electrons. Ion inertia is included in the ion momentum equation for motion along the geomagnetic field. This is important in modeling the topside ionosphere and plasmasphere where the plasma becomes collisionless.
SAMI3 includes 21 chemical reactions and radiative recombination, and uses a nonorthogonal, nonuniform, fixed grid covering the magnetic latitude range ±89 degrees.
Drivers
Neutral composition, temperature, and winds: NRLMSISE00 (Picone et al., 2002) and HWM14 (Drob et al., 2015).
Solar radiation: Flare Irradiance Spectral Model version 2 (FISM v2)
Magnetic field: Richmond apex model [Richmond, 1995].
Neutral wind dynamo electric field: determined from the solution of a 2D potential equation [Huba et al., 2008].
For the SAMI3/Weimer configuration: High latitude electric field: calculated from the empirical Weimer model for the potential.
For the SAMI3/AMPERE configuration: High latitude electric field: calculated using the Magnetosphere-Ionosphere Coupling solver (MIX) developed by Merkin and Lyon (2010). The inputs to MIX are SAMI3's internal conductances, plus field-aligned current observations from the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE), derived from the 66+ satellite Iridium NEXT constellation's engineering magnetometer data. This potential calculation is described in Chartier et al. (2022).
For ease of use, SAMI3 output is remapped to a regular grid using the Earth System Modeling Framework [Hill et al., 2004].