Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Money Supply M2 in the United States increased to 21942 USD Billion in May from 21862.40 USD Billion in April of 2025. This dataset provides - United States Money Supply M2 - actual values, historical data, forecast, chart, statistics, economic calendar and news.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Money Supply M0 in the United States increased to 5748600 USD Million in June from 5648700 USD Million in May of 2025. This dataset provides - United States Money Supply M0 - actual values, historical data, forecast, chart, statistics, economic calendar and news.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Money Supply M1 in the United States increased to 18803 USD Billion in June from 18693 USD Billion in May of 2025. This dataset provides - United States Money Supply M1 - actual values, historical data, forecast, chart, statistics, economic calendar and news.
The U.S. Geological Survey (USGS) Water Resources Mission Area (WMA) is working to address a need to understand where the Nation is experiencing water shortages or surpluses relative to the demand for water by delivering routine assessments of water supply and demand and an understanding of the natural and human factors affecting the balance between supply and demand. A key part of these national assessments is identifying long-term trends in water availability, including groundwater and surface water quantity, quality, and use. This data release contains Mann-Kendall monotonic trend analyses for 18 observed annual and monthly streamflow metrics at 6,347 U.S. Geological Survey streamgages located in the conterminous United States, Alaska, Hawaii, and Puerto Rico. Streamflow metrics include annual mean flow, maximum 1-day and 7-day flows, minimum 7-day and 30-day flows, and the date of the center of volume (the date on which 50% of the annual flow has passed by a gage), along with the mean flow for each month of the year. Annual streamflow metrics are computed from mean daily discharge records at U.S. Geological Survey streamgages that are publicly available from the National Water Information System (NWIS). Trend analyses are computed using annual streamflow metrics computed through climate year 2022 (April 2022 - March 2023) for low-flow metrics and water year 2022 (October 2021 - September 2022) for all other metrics. Trends at each site are available for up to four different periods: (i) the longest possible period that meets completeness criteria at each site, (ii) 1980-2020, (iii) 1990-2020, and (iv) 2000-2020. Annual metric time series analyzed for trends must have 80 percent complete records during fixed periods. In addition, each of these time series must have 80 percent complete records during their first and last decades. All longest-possible-period time series must be at least 10 years long and have annual metric values for at least 80% of the years from 2013 to 2022.
This data release provides the following five CSV output files along with a model archive:
(1) streamflow_trend_results.csv - contains test results of all trend analyses, with each row representing one unique combination of (i) NWIS streamgage identifier, (ii) metric (computed using Oct 1 - Sep 30 water years, except for low-flow metrics, which use climate years, Apr 1 - Mar 31), (iii) trend period of interest (longest possible period through 2022, 1980-2020, 1990-2020, or 2000-2020), and (iv) record covering either the full trend period or only the portion of the trend period following substantial increases in cumulative upstream reservoir storage capacity. This is an output from the final process step (#5) of the workflow.
(2) streamflow_trend_trajectories_with_confidence_bands.csv - contains annual trend trajectories estimated using Theil-Sen regression, which estimates the median of the probability distribution of a metric for a given year, along with 90 percent confidence intervals (5th and 95th percentile values). This is an output from the final process step (#5) of the workflow.
(3) streamflow_trend_screening_all_steps.csv - contains the screening results of all 7,873 streamgages initially considered as candidate sites for trend analysis and identifies the screens that prevented some sites from being included in the Mann-Kendall trend analysis.
(4) all_site_year_metrics.csv - contains annual time series values of streamflow metrics computed from mean daily discharge data at 7,873 candidate sites. This is an output of Process Step 1 in the workflow.
(5) all_site_year_filters.csv - contains information about the completeness and quality of daily mean discharge at each streamgage during each year (water year, climate year, and calendar year). This is also an output of Process Step 1 in the workflow and is combined with all_site_year_metrics.csv in Process Step 2.
In addition, a .zip file contains a model archive for reproducing the trend results using R 4.4.1 statistical software. See the README file contained in the model archive for more information.
Caution must be exercised when using monotonic trend analyses conducted over periods of up to several decades (and in some places longer) because of the potential for confounding deterministic gradual trends with multi-decadal climatic fluctuations. In addition, trend results are available for post-reservoir-construction periods within the four trend periods described above to avoid including abrupt changes arising from the construction of larger reservoirs in periods for which gradual monotonic trends are computed. Other abrupt changes, such as changes to water withdrawals and wastewater return flows, or episodic disturbances with multi-year recovery periods, such as wildfires, are not evaluated. Sites with pronounced abrupt changes or other non-monotonic trajectories of change may require more sophisticated trend analyses than those presented in this data release.
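As an illustration only (the data release itself provides an R 4.4.1 model archive), the following minimal Python sketch shows the two statistics named above, a Mann-Kendall-style monotonic trend test and a Theil-Sen slope with a 90 percent confidence band, applied to a synthetic annual metric series; the station, series, and numbers are invented for the example.

# Minimal sketch: monotonic trend test and Theil-Sen slope for one annual
# streamflow metric series. This is NOT the USGS model archive workflow;
# it only illustrates the two statistics named in the description.
import numpy as np
from scipy import stats

# Hypothetical annual mean-flow series for one streamgage (cfs), 1980-2020.
years = np.arange(1980, 2021)
rng = np.random.default_rng(0)
annual_mean_flow = 500 + 2.0 * (years - 1980) + rng.normal(0, 40, years.size)

# Mann-Kendall-style monotonic trend test via Kendall's tau against time.
tau, p_value = stats.kendalltau(years, annual_mean_flow)

# Theil-Sen slope (median of pairwise slopes) with a 90 percent confidence interval,
# matching the 5th/95th percentile bands described above.
slope, intercept, lo_slope, hi_slope = stats.theilslopes(annual_mean_flow, years, alpha=0.90)

print(f"Kendall tau = {tau:.3f}, p = {p_value:.3f}")
print(f"Theil-Sen slope = {slope:.2f} cfs/yr (90% CI {lo_slope:.2f} to {hi_slope:.2f})")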
To aid in parameterization of mechanistic, statistical, and machine learning models of hydrologic systems in the contiguous United States (CONUS), flow-conditioned parameter grids (FCPGs) have been generated describing upstream basin mean elevation, slope, land cover class, latitude, and 30-year climatologies of mean total annual precipitation, minimum daily air temperature, and maximum daily air temperature. Additional datasets of upstream basin area and binary stream presence-absence are provided to help validate queries against the flow-conditioned data. These data are provided as virtual raster tile (vrt) mosaics of cloud optimized GeoTIFFs to allow point queries of the data (see Distribution Information) without requiring downloading the whole dataset.
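As a hedged example of the point-query pattern mentioned above, the sketch below samples a single location from a VRT/COG mosaic with rasterio; the URL, layer name, and coordinate system are placeholders, not the dataset's actual distribution paths (see Distribution Information for those).

# Minimal sketch: point query against a cloud-optimized GeoTIFF / VRT mosaic
# without downloading the full dataset. The URL below is a placeholder.
import rasterio

fcpg_url = "https://example.com/fcpg/upstream_mean_precip.vrt"  # hypothetical path
lon, lat = -105.0, 40.0  # query point; assumes the mosaic's CRS is lon/lat

with rasterio.open(fcpg_url) as src:
    value = next(src.sample([(lon, lat)]))[0]
    print(f"Upstream-basin mean annual precipitation at ({lon}, {lat}): {value}")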
This dataset contains annual peak-flow data, PeakFQ specifications, and results of flood-frequency analyses of annual peak flows for 368 selected streamflow gaging stations (streamgages) operated by the U.S. Geological Survey (USGS) in the Great Lakes and Ohio River basins. "PeakFQinput_all.txt" contains annual peak-flow data, ending in water year 2013, for all 368 streamgages in the study area. Annual peak-flow data were obtained from the USGS National Water Information System (NWIS) database (https://nwis.waterdata.usgs.gov/usa/nwis/peak). "PeakFQspec_all.psf" contains PeakFQ specifications for all 368 streamgages in the study area. The specifications were developed by hydrologists in the various USGS Water Science Centers that participated in the study. "PeakFQoutput_all.PRT" contains the results of flood-frequency analyses of annual peak-flow data, for each of the 368 streamgages in the study area, that were conducted using the Expected Moments Algorithm (England and others, 2018). Using the annual peak-flow data in "PeakFQinput_all.txt" and the specifications in "PeakFQspec_all.psf", "PeakFQoutput_all.PRT" was generated in version 7.2 of USGS flood-frequency analysis software PeakFQ (https://water.usgs.gov/software/PeakFQ/; Veilleux and others, 2014). Results of the flood-frequency analyses were used to estimate regional skew for the study area using Bayesian Weighted Least Squares / Bayesian Generalized Least Squares (B-WLS / B-GLS) regression.
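For orientation only, the sketch below fits a plain log-Pearson Type III distribution to a synthetic annual peak series with scipy; it is a simplified stand-in for, not a reproduction of, the Expected Moments Algorithm and regional-skew weighting used in PeakFQ.

# Minimal sketch: a plain log-Pearson Type III fit to an annual peak-flow series.
# This ignores regional skew, historical peaks, and censored data, all of which
# PeakFQ's Expected Moments Algorithm handles.
import numpy as np
from scipy import stats

# Hypothetical annual peak flows (cfs) for one streamgage.
peaks = np.array([12000, 8500, 15200, 9800, 22000, 7600, 18400, 11000, 13500, 9100])
log_peaks = np.log10(peaks)

# Fit a Pearson Type III distribution to the log-transformed peaks.
skew, loc, scale = stats.pearson3.fit(log_peaks)

# 1-percent annual exceedance probability flood ("100-year flood").
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
print(f"Estimated 100-year peak flow: {q100:.0f} cfs")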
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Update Notes
Mar 16, 2024: remove spaces in the file and folder names.
Mar 31, 2024: delete the underscore in city names that contain a space (such as San Francisco) in the '02_TransCAD_results' folder to ensure correct data loading by TransCAD (software version: 9.0).
Aug 31, 2024: add the 'cityname_link_LinkFlows.csv' file in the '02_TransCAD_results' folder to match the link from the input data and the link from the TransCAD results (LinkFlows) with the same Link_ID.
Introduction
This is a unified and validated traffic dataset for 20 US cities. There are 3 folders for each city.
01 Input data
the initial network data obtained from OpenStreetMap (OSM)
the visualization of the OSM data
processed node / link / od data
02 TransCAD results (software version: 9.0)
cityname.dbd : geographical network database of the city supported by TransCAD (version 9.0)
cityname_link.shp / cityname_node.shp : network data supported by GIS software, which can be imported into TransCAD manually; the corresponding '.dbd' file can then be generated for TransCAD versions lower than 9.0
od.mtx : OD matrix supported by TransCAD
LinkFlows.bin / LinkFlows.csv : traffic assignment results by TransCAD
cityname_link_LinkFlows.csv : the input link attributes joined with the traffic assignment results by TransCAD
ShortestPath.mtx / ue_travel_time.csv : the travel time (min) between OD pairs by TransCAD
03 AequilibraE results (software version: 0.9.3)
cityname.shp : shapefile network data of the city supported by QGIS or other GIS software
od_demand.aem : OD matrix supported by AequilibraE
network.csv : the network file used for traffic assignment in AequilibraE
assignment_result.csv : traffic assignment results by AequilibraE
Publication
Xu, X., Zheng, Z., Hu, Z. et al. (2024). A unified dataset for the city-scale traffic assignment model in 20 U.S. cities. Sci Data 11, 325. https://doi.org/10.1038/s41597-024-03149-8
Usage Notes
If you use this dataset in your research or any other work, please cite both the dataset and the paper above. A brief introduction to how to use this dataset can be found on GitHub. A more detailed illustration of compiling the traffic dataset with AequilibraE is available in the GitHub code or Colab code. A minimal sketch of joining the link inputs with the assignment results appears after this entry.
Contact
If you have any inquiries, please contact Xiaotong Xu (email: kid-a.xu@connect.polyu.hk).
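A minimal sketch (assuming pandas, with hypothetical file paths and column names; only Link_ID and the file roles are taken from the description above) of joining the input link attributes with the TransCAD assignment results:

import pandas as pd

# Hypothetical paths; real folder and file names follow the per-city layout described above.
links = pd.read_csv("01_Input_data/SanFrancisco_link.csv")      # processed link data (input)
flows = pd.read_csv("02_TransCAD_results/LinkFlows.csv")        # TransCAD traffic assignment results

# Join on the shared Link_ID, mirroring what 'cityname_link_LinkFlows.csv' provides.
merged = links.merge(flows, on="Link_ID", how="left")
merged.to_csv("SanFrancisco_link_LinkFlows.csv", index=False)
print(merged.head())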
A Point of Interest (POI) is defined as an entity (such as a business) at a ground location (a point) which may be of interest. We provide high-quality POI data that is fresh, consistent, customizable, easy to use, and has high-density coverage for all countries of the world.
This is our process flow:
Our machine learning systems continuously crawl for new POI data
Our geoparsing and geocoding calculates their geo locations
Our categorization systems cleanup and standardize the datasets
Our data pipeline API publishes the datasets on our data store
A new POI comes into existence. It could be a bar, a stadium, a museum, a restaurant, a cinema, or a store. In today's interconnected world, its information will appear very quickly in social media, pictures, websites, and press releases. Soon after that, our systems will pick it up.
POI data is in constant flux. Every minute, worldwide, over 200 businesses move, over 600 new businesses open their doors, and over 400 businesses cease to exist. Over 94% of all businesses have a public online presence of some kind that reflects such changes. When a business changes, its website and social media presence change too. We'll then extract and merge the new information, thus creating the most accurate and up-to-date business information dataset across the globe.
We offer our customers perpetual data licenses for any dataset representing this ever-changing information, downloaded at any given point in time. This makes our company's licensing model unique in the current Data-as-a-Service (DaaS) industry. Our customers don't have to delete our data after the expiration of a certain "Term", regardless of whether the data was purchased as a one-time snapshot or via our data update pipeline.
Customers requiring regularly updated datasets may subscribe to our annual subscription plans. Our data is continuously being refreshed, so subscription plans are recommended for those who need the most up-to-date data. The main differentiators between us and the competition are our flexible licensing terms and our data freshness.
Data samples may be downloaded at https://store.poidata.xyz/us
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A map service depicting modeled streamflow metrics from the historical time period (1977-2006) in the United States. In addition to standard NHD attributes, the streamflow datasets include metrics on mean daily flow (annual and seasonal); flood levels associated with 1.5-year, 10-year, and 25-year floods; annual and decadal minimum weekly flows and date of minimum weekly flow; center of flow mass date; baseflow index; and average number of winter floods. These files and additional information are available on the project website, https://www.fs.usda.gov/rm/boise/AWAE/projects/modeled_stream_flow_metrics.shtml. Streams without flow metrics (null values) were removed from this dataset to improve display speed; to see all stream lines, use an NHD flowline dataset.
The flow regime is of fundamental importance in determining the physical and ecological characteristics of a river or stream, but actual flow measurements are only available for a small minority of stream segments, mostly on large rivers. Flows for all other streams must be extrapolated or modeled. Modeling is also necessary to estimate flow regimes under future climate conditions. Climate data such as this dataset is valuable for planning and monitoring purposes. Business use cases include: climate change and water rights assessments; analysis of water availability, runoff, groundwater, and impacts to aquatic organisms; resource management; post-fire recovery; restoration activities; etc. Hydro flow metrics data can be downloaded from here.
This feature layer contains a series of fields from the NHD, including the COMID, which provides a unique identifier for each NHD stream segment, as well as other basic hydrological information. It also contains the Region field, which indicates the NHD region (2-digit hydrologic unit codes) or a subdivision of regions based on NHDPlus production units (https://www.horizon-systems.com/NHDPlus/). Production units are designated by letters appended to the region code, such as 10U (the upper Missouri River basin). Additional documentation about this dataset is located in the data user guide. A StoryMap including a map viewer and map exporter by forest/region is also available. Additional climate and streamflow products from the Office of Sustainability and Climate are available in our Climate Gallery.
This dataset contains the following data layers:
Mean annual flow: calculated as the mean of the yearly discharge values
Mean spring flow: calculated as the mean of the March/April/May discharge values, weighted by the number of days per month
Mean summer flow: calculated as the mean of the June/July/August discharge values, weighted by the number of days per month
Mean autumn flow: calculated as the mean of the September/October/November discharge values, weighted by the number of days per month
Mean winter flow: calculated as the mean of the December/January/February discharge values, weighted by the number of days per month
1.5-year flood: calculated by first finding the greatest daily flow from each year; the 33rd percentile of the annual maximum series defines the flow that occurs every 1.5 years, on average
10-year flood: the flow that occurs every 10 years, on average, calculated as the 90th percentile of the annual maximum series
25-year flood: the flow that occurs every 25 years, on average, calculated as the 96th percentile of the annual maximum series
1-year minimum weekly flow: the average across years of the lowest 7-day flow during each year. The year is defined either as January-December or June-May, whichever has the lower standard deviation in the date of the low-flow week. This was done so that, for example, in areas with winter droughts, a December-to-January drought would not be split up by the start of a new year.
10-year minimum weekly flow: average lowest 7-day flow during a decade (calculated as the 10th percentile of the annual minimum weekly flows)
Date of minimum weekly flow: average date of the center of the lowest 7-day flow of the year, with the year defined either as January-December or June-May, whichever has the lower standard deviation in the date of the low-flow week. This was done to prevent erroneous results when the drought season crosses the break between years: e.g., if the lowest flow was on December 31 of the first year (day #365) and January 1 of the second year (day #1), this would give an average of day #183, July 2nd; switching the range of months in this case prevents this error.
Baseflow index: the ratio of the average daily flow during the lowest 7-day flow of the year to the average daily flow during the year overall; this can be used as a rough estimate of the proportion of streamflow originating from groundwater discharge, rather than from recent precipitation
Center of flow mass/center of timing: calculated using a weighted mean, CFM = (flow_1*1 + flow_2*2 + ... + flow_365*365) / (flow_1 + flow_2 + ... + flow_365), where flow_i is the flow volume on day i of the water year. This can be used to indicate areas where most of the precipitation occurs early in the water year (fall), or later (spring/summer).
Number of winter floods: calculated as the average number of daily flows between December 1 and March 31 that exceed the 95th percentile of daily flows across the entire year
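As a small worked example of the center-of-flow-mass formula above, using synthetic daily flows for one water year:

import numpy as np

# Synthetic daily flows for one water year (Oct 1 - Sep 30), 365 values (cfs).
rng = np.random.default_rng(0)
daily_flow = rng.gamma(shape=2.0, scale=50.0, size=365)

days = np.arange(1, daily_flow.size + 1)             # day of water year, 1..365
cfm = np.sum(daily_flow * days) / np.sum(daily_flow)  # flow-weighted mean day
print(f"Center of flow mass: day {cfm:.1f} of the water year")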
https://choosealicense.com/licenses/other/
Renai Circulation is an image dataset from a certain image website.
How is it made?
A massive scrape was done on archive.org back in 2023. Because it is stored in WARC files (for obvious reasons), it is extremely unwieldy to process. As such, we did the following:
Download the megawarc.warc
Process the HTML (pages & comments) into compacted JSON data
Save images as-is
NFAA?
Yes, it contains content that is permitted in Japan; I have seen the kind of material people post on the site.… See the full description on the dataset page: https://huggingface.co/datasets/DSULT-Core/Renai-Circulation.
The National Forest Climate Change Maps project was developed by the Rocky Mountain Research Station (RMRS) and the Office of Sustainability and Climate to meet the needs of national forest managers for information on projected climate changes at a scale relevant to decision making processes, including forest plans. The maps use state-of-the-art science and are available for every national forest in the contiguous United States with relevant data coverage. Currently, the map sets include variables related to precipitation, air temperature, snow (including snow residence time and April 1 snow water equivalent), and stream flow.
Historical (1975-2005) and future (2071-2090) precipitation and temperature data for the state of Alaska were developed by the Scenarios Network for Alaska and Arctic Planning (SNAP) (https://snap.uaf.edu). Average temperature values were calculated as the mean of monthly minimum and maximum air temperature values (degrees C), averaged over the season of interest (annual, winter, or summer). These datasets have several important differences from the MACAv2-Metdata (https://climate.northwestknowledge.net/MACA/) products, used in the contiguous U.S. They were developed using different global circulation models and different downscaling methods, and were downscaled to a different scale (771 m instead of 4 km). While these cover the same time periods and use broadly similar approaches, caution should be used when directly comparing values between Alaska and the contiguous United States.
Raster data are also available for download from RMRS site (https://www.fs.usda.gov/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/categories/us-raster-layers.html), along with pdf maps and detailed metadata (https://www.fs.usda.gov/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/downloads/NationalForestClimateChangeMapsMetadata.pdf).
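A minimal sketch of the seasonal temperature averaging described above (mean of monthly minimum and maximum air temperature, then averaged over the season of interest), using invented monthly values for one grid cell:

import numpy as np

# Hypothetical monthly climatologies for one grid cell, January..December (deg C).
tmin = np.array([-12.0, -10.5, -6.0, 0.5,  6.0, 10.5, 13.0, 12.0,  7.0, 0.0, -7.0, -11.0])
tmax = np.array([ -2.0,  -0.5,  4.0, 9.5, 15.0, 20.5, 24.0, 23.0, 17.0, 9.0,  1.0,  -1.0])

tavg = (tmin + tmax) / 2.0      # monthly average temperature
winter = [11, 0, 1]             # Dec, Jan, Feb (0-based month indices)
summer = [5, 6, 7]              # Jun, Jul, Aug

print(f"Annual mean: {tavg.mean():.1f} C")
print(f"Winter mean: {tavg[winter].mean():.1f} C")
print(f"Summer mean: {tavg[summer].mean():.1f} C")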
This dataset contains the empirical flow-duration curves (FDCs) derived from complete water years of daily streamflow data for 1,378 streamgages in 19 study regions in the conterminous U.S. from October 1, 1980 through September 30, 2013 from mostly undisturbed watersheds contained in child item 1, "Daily streamflow data for selected streamgages in the conterminous United States", of this data release. The empirical FDCs are presented as 27 quantiles ranging from 0.02 to 99.98 percent nonexceedance probabilities. Because streamflow data less than 0.005 cfs are reported as zero, they are considered to be censored values. To handle these censored data values, two versions of the FDC quantiles from streamgage records were computed: (1) empFDCs.unfilled.xlsx - where the quantiles were estimated from the original data and (2) empFDCs.filled.xlsx – where the censored quantile values were filled with estimated positive values. With the method used for filling the censored quantiles, which relies on a lognormal fit to the data, occasionally the data values estimated for the largest censored values were larger than the smallest noncensored data values. This sometimes resulted in increases to the quantiles greater than the censoring level. As a result, some of the noncensored flow quantile values in the filled dataset are greater than the corresponding noncensored flow quantile values in the unfilled dataset. Methods are fully described by Over and others (2018).
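As an illustration only (not the procedure of Over and others, 2018), the sketch below computes empirical nonexceedance quantiles with numpy from a synthetic daily record; the quantile list is shortened from the 27 quantiles used in the data release, and the censoring treatment is only noted in a comment.

import numpy as np

# Synthetic daily streamflow record (cfs) standing in for a streamgage's complete water years.
rng = np.random.default_rng(0)
daily_flow = rng.gamma(shape=1.5, scale=20.0, size=365 * 33)   # roughly 33 water years, as in 1981-2013

# Illustrative nonexceedance probabilities; the data release uses 27 from 0.02 to 99.98 percent.
nonexceed_pct = np.array([0.02, 1, 5, 10, 25, 50, 75, 90, 95, 99, 99.98])
quantiles = np.percentile(daily_flow, nonexceed_pct)

# In the "unfilled" version, quantiles of zero correspond to censored values
# (daily flows below 0.005 cfs are reported as zero in NWIS).
print(dict(zip(nonexceed_pct.tolist(), np.round(quantiles, 3).tolist())))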
The USCC-1 - Amounts Outstanding and in Circulation table informs the public of the total face value of currency and coin used as a medium of exchange that is in circulation. It defines the total amount of currency and coin outstanding and the portion deemed to be in circulation. It includes some old and current rare issues that do not circulate or that may do so to a limited extent. Treasury includes them in the statement because the issues were originally intended for general circulation. The USCC comes from monthly reports compiled by Treasury offices, U.S. Mint offices, the Federal Reserve banks (FRBs), and the Federal Reserve Board.
VITAL SIGNS INDICATOR Migration (EQ4)
FULL MEASURE NAME Migration flows
LAST UPDATED December 2018
DESCRIPTION Migration refers to the movement of people from one location to another, typically crossing a county or regional boundary. Migration captures both voluntary relocation – for example, moving to another region for a better job or lower home prices – and involuntary relocation as a result of displacement. The dataset includes metropolitan area, regional, and county tables.
DATA SOURCE American Community Survey County-to-County Migration Flows 2012-2015 5-year rolling average http://www.census.gov/topics/population/migration/data/tables.All.html
CONTACT INFORMATION vitalsigns.info@bayareametro.gov
METHODOLOGY NOTES (across all datasets for this indicator) Data for migration comes from the American Community Survey; county-to-county flow datasets experience a longer lag time than other standard datasets available in FactFinder. 5-year rolling average data was used for migration for all geographies, as the Census Bureau does not release 1-year annual data. Data is not available at any geography below the county level; note that flows that are relatively small on the county level are often within the margin of error. The metropolitan area comparison was performed for the nine-county San Francisco Bay Area, in addition to the primary MSAs for the nine other major metropolitan areas, by aggregating county data based on current metropolitan area boundaries. Data prior to 2011 is not available on Vital Signs due to inconsistent Census formats and a lack of net migration statistics for prior years. Only counties with a non-negligible flow are shown in the data; all other pairs can be assumed to have zero migration.
Given that the vast majority of migration out of the region was to other counties in California, California counties were bundled into the following regions for simplicity:
Bay Area: Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, Sonoma
Central Coast: Monterey, San Benito, San Luis Obispo, Santa Barbara, Santa Cruz
Central Valley: Fresno, Kern, Kings, Madera, Merced, Tulare
Los Angeles + Inland Empire: Imperial, Los Angeles, Orange, Riverside, San Bernardino, Ventura
Sacramento: El Dorado, Placer, Sacramento, Sutter, Yolo, Yuba
San Diego: San Diego
San Joaquin Valley: San Joaquin, Stanislaus
Rural: all other counties (23)
One key limitation of the American Community Survey migration data is that it is not able to track emigration (movement of current U.S. residents to other countries). This is despite the fact that it is able to quantify immigration (movement of foreign residents to the U.S.), generally by continent of origin. Thus the Vital Signs analysis focuses primarily on net domestic migration, while still specifically citing in-migration flows from countries abroad based on data availability.
The National Forest Climate Change Maps project was developed by the Rocky Mountain Research Station (RMRS) and the Office of Sustainability and Climate to meet the needs of national forest managers for information on projected climate changes at a scale relevant to decision making processes, including forest plans. The maps use state-of-the-art science and are available for every national forest in the contiguous United States with relevant data coverage. Currently, the map sets include variables related to precipitation, air temperature, snow (including snow residence time and April 1 snow water equivalent), and stream flow.
Historical (1975-2005) and future (2071-2090) precipitation and temperature data for the state of Alaska were developed by the Scenarios Network for Alaska and Arctic Planning (SNAP) (https://snap.uaf.edu). Monthly precipitation values (mm) were summed over the season of interest (annual, winter, or summer). These datasets have several important differences from the MACAv2-Metdata (https://climate.northwestknowledge.net/MACA/) products, used in the contiguous U.S. They were developed using different global circulation models and different downscaling methods, and were downscaled to a different scale (771 m instead of 4 km). While these cover the same time periods and use broadly similar approaches, caution should be used when directly comparing values between Alaska and the contiguous United States.
Raster data are also available for download from RMRS site (https://www.fs.fed.us/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/categories/us-raster-layers.html), along with pdf maps and detailed metadata (https://www.fs.fed.us/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/downloads/NationalForestClimateChangeMapsMetadata.pdf).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The United States recorded a capital and financial account surplus of 311100 USD Million in May of 2025. This dataset provides the latest reported value for - United States Net Treasury International Capital Flows - plus previous releases, historical high and low, short-term forecast and long-term prediction, economic calendar, survey consensus and news.
The National Forest Climate Change Maps project was developed by the Rocky Mountain Research Station (RMRS) and the Office of Sustainability and Climate to meet the needs of national forest managers for information on projected climate changes at a scale relevant to decision making processes, including forest plans. The maps use state-of-the-art science and are available for every national forest in the contiguous United States with relevant data coverage. Currently, the map sets include variables related to precipitation, air temperature, snow (including snow residence time and April 1 snow water equivalent), and stream flow.
Historical (1975-2005) and future (2071-2090) precipitation and temperature data for the state of Alaska were developed by the Scenarios Network for Alaska and Arctic Planning (SNAP) (https://snap.uaf.edu). Monthly precipitation values (mm) were summed over the season of interest (annual, winter, or summer). These datasets have several important differences from the MACAv2-Metdata (https://climate.northwestknowledge.net/MACA/) products, used in the contiguous U.S. They were developed using different global circulation models and different downscaling methods, and were downscaled to a different scale (771 m instead of 4 km). While these cover the same time periods and use broadly similar approaches, caution should be used when directly comparing values between Alaska and the contiguous United States.
Raster data are also available for download from the RMRS site (https://www.fs.usda.gov/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/categories/us-raster-layers.html), along with pdf maps and detailed metadata (https://www.fs.usda.gov/rm/boise/AWAE/projects/NFS-regional-climate-change-maps/downloads/NationalForestClimateChangeMapsMetadata.pdf).
The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) Phase 2 has developed a number of transient climate change scenarios based on coupled atmosphere-ocean general circulation model (AOGCM) transient climate experiments. The purpose of these scenarios is to reflect time-dependent changes in surface climate from AOGCMs in terms of both (1) long-term trends and (2) changes in multiyear (3-5 yr) to decadal variability patterns, such as the El Nino/Southern Oscillation (ENSO). Development of the data set is reported in Kittel et al. (1997). Scenarios have been derived from transient greenhouse gas experiments with sulfate aerosols from the Canadian Climate Center (CCC) and the Hadley Centre (HADCM2; Mitchell et al. 1995, Johns et al. 1997), accessed via the Climate Impacts LINK Project, Climatic Research Unit, University of East Anglia. Scenarios were developed for the following variables: total incident solar radiation, minimum and maximum temperature, vapor pressure, precipitation, relative humidity, and mean monthly irradiance, for the time period January 1994 to approximately 2100. These data and the VEMAP 1 data (Kittel et al. 1995) were used to drive models in VEMAP Phase 2, the objectives of which are to compare time-dependent ecological responses of biogeochemical and coupled biogeochemical-biogeographical models to historical and projected transient forcings across the conterminous U.S. This data set of annual climate change scenarios was designed to be concatenated with the VEMAP 2: U.S. Annual Climate, 1895-1993 data set to create a single climate series from 1895 to approximately 2100. This data set is being made available for the U.S. National Assessment. Users are requested to confer with the NCAR VEMAP Data Group to ensure that the intended application of the data set is consistent with the generation and limitations of the data. For more information, refer to the VEMAP homepage.
Data Citation
The data set should be cited as follows: Kittel, T. G. F., N. A. Rosenbloom, C. Kaufman, J. A. Royle, C. Daly, H. H. Fisher, W. P. Gibson, S. Aulenbach, D. N. Yates, R. McKeown, D. S. Schimel, and VEMAP 2 Participants. 2001. VEMAP 2: U.S. Annual Climate Change Scenarios. Available on-line from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, U.S.A.
Summary of datasets for flooding analysis reported in Wobus et al. (2019):
1) Wobus_Flood_Damages_huc10.xlsx – contains total flood damages for floods of each specified recurrence interval, organized by HUC10, for watersheds with RiskMAP data from 3 or 5 recurrence intervals. Also includes a lookup table to crosswalk from HUC10 to NCA region, and from HUC10 to the ReachID associated with the modeled flow data.
2) CONUS_model_dT.xlsx – contains the year that each of the 29 models evaluated meets the specified temperature threshold relative to the 2001-2020 baseline, on a CONUS-average basis. Sheet “Summary” in that workbook is the main result from that analysis.
3) annmaxs_all85.zip – contains annual maximum timeseries from all nodes in the geospatial fabric as modeled by NCAR/USBR in the VIC downscaled hydrologic dataset. These are the raw data used to generate extreme value statistics for the baseline (2001-2020) and future time periods (see below). Variables are “reaches” (57116x1) with reachIDs; “modelnames85” (1x29) with model IDs; “years” (150x1) with years from 1950-2100; and “annmaxs_rcp85” (57116x150x29) with annual maximum flow values by reachID, year, and model. Note the step function in annual maximum flows as reported by Wobus et al. (2017) in the year 2000 – do NOT compare pre-2000 vs post-2000 data.
Citation information for this dataset can be found in the EDG's Metadata Reference Information section and Data.gov's References section.
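A minimal sketch of working with the annual-maximum array described for annmaxs_all85.zip, restricted to the 2001-2020 baseline; the loading step is omitted because the file format is not specified here, and a small synthetic array stands in for the real one (real shape: 57116 reaches x 150 years x 29 models).

import numpy as np

n_reaches, n_years, n_models = 100, 150, 29            # reduced reach count for the example
rng = np.random.default_rng(0)
annmaxs_rcp85 = rng.gamma(2.0, 100.0, size=(n_reaches, n_years, n_models))
years = np.arange(1950, 1950 + n_years)                # 150 annual values spanning 1950-2100 in the release

# Use only post-2000 data, per the caution above about the step change in the year 2000.
baseline = (years >= 2001) & (years <= 2020)
baseline_mean = annmaxs_rcp85[:, baseline, :].mean(axis=1)   # mean annual maximum per reach and model
print(baseline_mean.shape)                                    # (n_reaches, n_models)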