22 datasets found
  1. Descriptive statistics of the dataset with mean, standard deviation (SD),...

    • plos.figshare.com
    xls
    Updated Jun 14, 2023
    Cite
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann (2023). Descriptive statistics of the dataset with mean, standard deviation (SD), median, and the lower (quantile 5%) and upper (quantile 95%) boundary of the 90% confidence interval. [Dataset]. http://doi.org/10.1371/journal.pone.0267352.t001
    Explore at:
    xls
    Dataset updated
    Jun 14, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Descriptive statistics of the dataset with mean, standard deviation (SD), median, and the lower (quantile 5%) and upper (quantile 95%) boundary of the 90% confidence interval.

  2. Weather and Housing in North America

    • kaggle.com
    zip
    Updated Feb 13, 2023
    Cite
    The Devastator (2023). Weather and Housing in North America [Dataset]. https://www.kaggle.com/datasets/thedevastator/weather-and-housing-in-north-america
    Explore at:
    zip (512280 bytes)
    Dataset updated
    Feb 13, 2023
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    North America
    Description

    Weather and Housing in North America

    Exploring the Relationship between Weather and Housing Conditions in 2012

    By [source]

    About this dataset

    This comprehensive dataset explores the relationship between housing and weather conditions across North America in 2012. Through a range of climate variables such as temperature, wind speed, humidity, pressure, and visibility, it provides unique insights into the weather-influenced environment of numerous regions. The interrelated nature of housing parameters such as longitude, latitude, median income, median house value, and ocean proximity further enhances our understanding of how distinct climates play an integral part in area real estate valuations. Analyzing these two data sets offers a wealth of knowledge about which factors can dictate the value and comfort level offered by residential areas throughout North America.


    How to use the dataset

    This dataset offers plenty of insights into the effects of weather and housing on North American regions. To explore these relationships, you can perform data analysis on the variables provided.

    First, start by examining descriptive statistics (e.g., mean, median, mode). These can show you the general trend and distribution of each variable in this dataset. For example, what is the most common temperature in a given region? What is the average wind speed? How does this vary across different regions? By looking at descriptive statistics, you can get an initial idea of how various weather conditions and housing attributes interact with one another.

    Next, explore correlations between variables. Are certain weather variables correlated with specific housing attributes? Is there a link between wind speeds and median house value? Or between humidity and ocean proximity? Analyzing correlations allows for deeper insights into how different aspects may influence one another for a given region or area. These correlations may also inform broader patterns that are present across multiple North American regions or countries.

    Finally, use visualizations to further investigate the relationship between climate and housing attributes in North America in 2012. Graphs let you visualize trends such as seasonal variations or long-term changes over time more easily, so they are useful for interpreting large amounts of data quickly while providing context beyond what the numbers alone can tell us about relationships between different aspects of this dataset.
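
    As a minimal sketch of this workflow, assuming a pandas environment: the weather column names follow the Columns section further down, while the housing file name and any join key are illustrative assumptions, not part of the published dataset.

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    # Load the weather observations; the housing file name below is a hypothetical placeholder.
    weather = pd.read_csv("Weather.csv", parse_dates=["Date/Time"])
    housing = pd.read_csv("housing.csv")

    # 1. Descriptive statistics (mean, median, spread) for a few weather variables.
    print(weather[["Temp_C", "Wind Speed_km/h", "Rel Hum_%"]].describe())

    # 2. Correlations between weather variables; after joining weather and housing on a
    #    shared region key, the same call would relate climate to housing attributes.
    print(weather[["Temp_C", "Wind Speed_km/h", "Rel Hum_%"]].corr())

    # 3. A quick visualization of a seasonal trend in temperature.
    weather.set_index("Date/Time")["Temp_C"].resample("M").mean().plot(title="Monthly mean temperature")
    plt.show()
    ```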

    Research Ideas

    • Analyzing the effect of climate change on housing markets across North America. By looking at temperature and weather trends in combination with housing values, researchers can better understand how climate change may be impacting certain regions differently than others.
    • Investigating the relationship between median income, house values and ocean proximity in coastal areas. Understanding how ocean proximity plays into housing prices may help inform real estate investment decisions and urban planning initiatives related to coastal development.
    • Utilizing differences in weather patterns across different climates to determine optimal seasonal rental prices for property owners. By analyzing changes in temperature, wind speed, humidity, pressure and visibility from season to season an investor could gain valuable insights into seasonal market trends to maximize their profits from rentals or Airbnb listings over time

    Acknowledgements

    If you use this dataset in your research, please credit the original authors and the data source.

    License

    License: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication. No Copyright: you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.

    Columns

    File: Weather.csv

    | Column name | Description |
    |:---|:---|
    | Date/Time | Date and time of the observation. (Date/Time) |
    | Temp_C | Temperature in Celsius. (Numeric) |
    | Dew Point Temp_C | Dew point temperature in Celsius. (Numeric) |
    | Rel Hum_% | Relative humidity in percent. (Numeric) |
    | Wind Speed_km/h | Wind speed in kilometers per hour. (Numeric) |
    | Visibility_km | Visibilit... |

  3. The median of bottom shear stress for the Gulf of Maine south into the...

    • catalog.data.gov
    • s.cnmilf.com
    • +2 more
    Updated Nov 19, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). The median of bottom shear stress for the Gulf of Maine south into the Middle Atlantic Bight, May 2010 to May 2011 (GMAINE_median.shp, Geographic, WGS 84) [Dataset]. https://catalog.data.gov/dataset/the-median-of-bottom-shear-stress-for-the-gulf-of-maine-south-into-the-middle-atlantic-big
    Explore at:
    Dataset updated
    Nov 19, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Gulf of Maine
    Description

    The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 0.03 degree (2.5-3.75 km, depending on latitude) resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
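
    As a small illustration of the statistical descriptors named above (the median and 95th percentile of an hourly shear-stress time series), using a synthetic series rather than the actual model output:

    ```python
    import numpy as np

    # Synthetic hourly bottom shear stress (Pa) for one grid cell over one year.
    rng = np.random.default_rng(0)
    stress = rng.gamma(shape=2.0, scale=0.05, size=365 * 24)

    # The descriptors included with this database for each grid cell.
    print("median:", np.median(stress))
    print("95th percentile:", np.percentile(stress, 95))
    ```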

  4. Virginia Median Household Income in the Past 12 Months by Census Block Group...

    • data.virginia.gov
    csv
    Updated Jan 3, 2025
    Cite
    Office of INTERMODAL Planning and Investment (2025). Virginia Median Household Income in the Past 12 Months by Census Block Group (ACS 5-Year) [Dataset]. https://data.virginia.gov/dataset/virginia-median-household-income-in-the-past-12-months-by-census-block-group-acs-5-year
    Explore at:
    csv (6955260)
    Dataset updated
    Jan 3, 2025
    Dataset authored and provided by
    Office of INTERMODAL Planning and Investment
    Description

    2013-2023 Virginia Median Household Income based on the past 12 months by Census Block Group. Contains estimates and margins of error.

    Special data considerations: Large negative values do exist (more detail below) and should be addressed prior to graphing or aggregating the data.

    A value of -666,666,666 in the estimate column indicates that either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.

    A value of -222,222,222 in the margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
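
    A minimal cleaning sketch for these sentinel codes before graphing or aggregating; the file and column names here are assumptions for illustration, not the published schema.

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical file and column names for the block-group table.
    df = pd.read_csv("virginia_median_household_income_acs5.csv")

    # Treat the ACS sentinel codes as missing values before any aggregation.
    sentinels = [-666666666, -222222222]
    df[["estimate", "margin_of_error"]] = df[["estimate", "margin_of_error"]].replace(sentinels, np.nan)
    ```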

    U.S. Census Bureau; American Community Survey, American Community Survey 5-Year Estimates, Table B19013 Data accessed from: Census Bureau's API for American Community Survey (https://www.census.gov/data/developers/data-sets.html)

    The United States Census Bureau's American Community Survey (ACS): -What is the American Community Survey? (https://www.census.gov/programs-surveys/acs/about.html) -Geography & ACS (https://www.census.gov/programs-surveys/acs/geography-acs.html) -Technical Documentation (https://www.census.gov/programs-surveys/acs/technical-documentation.html)

    Supporting documentation on code lists, subject definitions, data accuracy, and statistical testing can be found on the American Community Survey website in the Technical Documentation section. (https://www.census.gov/programs-surveys/acs/technical-documentation/code-lists.html)

    Sample size and data quality measures (including coverage rates, allocation rates, and response rates) can be found on the American Community Survey website in the Methodology section. (https://www.census.gov/acs/www/methodology/sample_size_and_data_quality/)

    Although the American Community Survey (ACS) produces population, demographic and housing unit estimates, it is the Census Bureau's Population Estimates Program that produces and disseminates the official estimates of the population for the nation, states, counties, cities, and towns and estimates of housing units for states and counties.

    Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted roughly as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see ACS Technical Documentation https://www.census.gov/programs-surveys/acs/technical-documentation.html). The effect of nonsampling error is not represented in these tables.
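
    A worked example of reading these margins of error, with invented numbers; dividing a 90 percent MOE by 1.645 to approximate the standard error follows standard ACS guidance.

    ```python
    estimate = 68500.0   # hypothetical median household income estimate
    moe_90 = 4200.0      # its published 90 percent margin of error

    lower, upper = estimate - moe_90, estimate + moe_90   # 90 percent confidence bounds
    standard_error = moe_90 / 1.645                       # approximate standard error

    print(f"90% CI: [{lower:.0f}, {upper:.0f}], SE ~ {standard_error:.0f}")
    ```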

    Annotation values are character representations of estimates and have values when non-integer information needs to be represented. Below are a few examples. Complete information is available on the ACS website under Notes on ACS Estimate and Annotation Values. (https://www.census.gov/data/developers/data-sets/acs-1year/notes-on-acs-estimate-and-annotation-values.html).

  5. Data from: Mean and Variance Corrected Test Statistics for Structural...

    • tandf.figshare.com
    txt
    Updated May 31, 2023
    Cite
    Yubin Tian; Ke-Hai Yuan (2023). Mean and Variance Corrected Test Statistics for Structural Equation Modeling with Many Variables [Dataset]. http://doi.org/10.6084/m9.figshare.10012976.v1
    Explore at:
    txt
    Dataset updated
    May 31, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Yubin Tian; Ke-Hai Yuan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data in social and behavioral sciences are routinely collected using questionnaires, and each domain of interest is tapped by multiple indicators. Structural equation modeling (SEM) is one of the most widely used methods to analyze such data. However, conventional methods for SEM face difficulty when the number of variables (p) is large even when the sample size (N) is also rather large. This article addresses the issue of model inference with the likelihood ratio statistic Tml. Using the method of empirical modeling, mean-and-variance corrected statistics for SEM with many variables are developed. Results show that the new statistics not only perform much better than Tml but also are substantial improvements over other corrections to Tml. When combined with a robust transformation, the new statistics also perform well with non-normally distributed data.

  6. Individual weight estimates for Great Lakes benthic invertebrates

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Aug 1, 2024
    Cite
    Allison Hrycik; Lyubov Burlakova; Alexander Karatayev; Susan Daniel; Ronald Dermott; Morgan Tarbell; Elizabeth Hinchey (2024). Individual weight estimates for Great Lakes benthic invertebrates [Dataset]. http://doi.org/10.5061/dryad.tx95x6b42
    Explore at:
    zip
    Dataset updated
    Aug 1, 2024
    Dataset provided by
    Rensselaer Polytechnic Institute
    Environmental Protection Agency
    New York State Department of Health
    Fisheries and Oceans Canada
    Buffalo State University
    Authors
    Allison Hrycik; Lyubov Burlakova; Alexander Karatayev; Susan Daniel; Ronald Dermott; Morgan Tarbell; Elizabeth Hinchey
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    The Great Lakes
    Description

    We present mean individual weights for common benthic invertebrates of the Great Lakes collected from over 2,000 benthic samples and eight years of data collection (2012-2019), both as species-specific weights and average weights of larger taxonomic groups of interest. The dataset we have assembled is applicable to food web energy flow models, calculation of secondary production estimates, interpretation of trophic markers, and for understanding how biomass distribution varies by benthic invertebrate species in the Great Lakes. A corresponding data paper describes comparisons of these data to benthic invertebrates in other lakes. Methods Data Collection Benthic invertebrates were collected from the EPA R/V Lake Guardian from 2012-2019 as part of the EPA Great Lakes National Program Office GLBMP and Cooperative Science and Monitoring Initiative (CSMI) benthic surveys. GLBMP samples are collected in all five of the Great Lakes annually and CSMI samples are collected in one of the Great Lakes annually. GLBMP includes 57-63 stations each year: 11 in Lake Superior (and 2-7 additional stations since 2014), 11 in Lake Huron, 16 in Lake Michigan, 10 in Lake Erie, and 10 (9 since 2015) in Lake Ontario. The number of CSMI stations vary by year. CSMI surveys for each lake took place in the following years: Erie 2014 (97 stations), Michigan 2015 (140 stations), Superior 2016 (59 stations), Huron 2017 (118 stations), and Ontario 2018 (46 stations). Additional CSMI surveys have occurred since 2019, however, we did not include these survey data in our analysis because samples would be unbalanced with some lakes sampled twice and other lakes sampled only once. We followed EPA Standard Operating Procedures for Benthic Invertebrate Field Sampling SOP LG406 (U.S. EPA, 2021). In short, triplicate samples were collected from each station using a Ponar grab (sampling area = 0.0523 m2 for all surveys except Lake Michigan CSMI, for which sampling area = 0.0483 m2) then rinsed through 500 µm mesh. Samples were preserved with 5-10% neutral buffered formalin with Rose Bengal stain. Lab Processing Samples were processed in the lab after preservation following EPA Standard Operating Procedure for Benthic Invertebrate Laboratory Analysis SOP LG407 (U.S. EPA, 2015). Briefly, organisms were picked out of samples using a low-magnification dissecting microscope then each organism was identified to the finest taxonomic resolution possible (usually species). Individuals of the same species, or size category, were blotted dry on cellulose filter paper to remove external water until the wet spots left by animal(s) on the absorbent paper disappeared. Blotting time varied based on the surface area/volume ratio of the organisms but was approximately one minute for large and medium chironomids and oligochaetes and less time (0.6 min) for smaller chironomids and oligochaetes. Care was taken to ensure that the procedure did not cause damage to the specimens. Larger organisms (e.g., dreissenids) often took longer to blot dry. All organisms in a sample within a given taxonomic unit were weighed together to the nearest 0.0001 g (WW). Dreissena were weighed by 5 mm size category (size fractions: 0-4.99 mm, 5-9.99 mm, etc.) to nearest 0.0001 g (shell and tissue WW). 
Data Analysis To calculate the total weight for each species that was mounted on slides by size groups for identification (e.g., Oligochaeta, Chironomidae), we multiplied the number of individuals of the species binned into each size category by the average weight of individuals in that category. If a species was found in more than one size category, we summed the weight of the species across all categories per sample. Oligochaetes often fragment in samples, and thus, were counted by tallying the number of oligochaete heads (anterior ends with prostomium) present in the sample. Oligochaete fragments were also counted and weighed for inclusion in biomass calculations. We set the cutoff for the minimum number of samples to calculate individual weights to ten samples (see companion data paper for details). Therefore, in our further analysis we only calculated individual weights when a taxonomic unit was found in at least ten samples. Species that were found in fewer than ten samples were excluded from the analysis. We calculated wet weights by species whenever possible. If species were closely related, had similar body size (based on our previous experience), and were found in few samples, they were grouped together to achieve our minimum sample size of ten. For some taxa (e.g., Chironomidae), individual species could not be identified so calculations were made at the finest taxonomic resolution possible (usually genus). We hereafter refer to the two taxonomic groupings of closely related species and taxa that could not be identified to species as “taxonomic units.” For each taxonomic unit, we calculated several summary statistics on wet weight: mean, minimum, and maximum weight, median weight, standard error of mean weight, and sample size (number of samples in which a taxonomic unit was present). We performed Kruskal-Wallis tests (Kruskal & Wallis, 1952) to determine when individuals within a species could be grouped by depth zone and/or lake when sample size was large enough (species found in ≥10 samples per group) to permit splitting because we expected species weight to differ by depth zone and/or lake. In all five Great Lakes, benthic density and species richness are greater at stations ≤70 m than at stations deeper than 70 m (Burlakova et al., 2018; Cook & Johnson, 1974). The 70 m depth contour separation of benthos mirrors a breakpoint in spring chlorophyll concentrations observed for these stations, suggesting that lake productivity is likely the major driver of benthic abundance and diversity across lakes (Burlakova et al., 2018). Therefore, we used two categories of depth zones: ≤70 m and > 70 m. If Kruskal-Wallis tests showed that weights did not differ by lake or depth, the average weight for a species was calculated as an average of all lakes and depths. If Kruskal-Wallis tests showed significant separation (α < 0.05) by lake or depth, then means were calculated for each group and we also compared the group means. Individuals in different lakes or depth zones were combined if the mean difference between most groups was less than 25%, even when Kruskal-Wallis tests were significant because small differences were likely not biologically significant. Oligochaete fragments for finer taxonomic units were reported separately from oligochaete species because it was rarely apparent which species the fragments came from. Mean individual wet weights were calculated for a total of 187 groupings within taxonomic units (data file “IndividualWeights_AllData.csv”). 
    For 117 taxonomic units, weights were calculated across all lakes, depths, and basins because weights were similar in all regions or because of small sample size; for seven taxonomic units, weights were calculated by lake; and for the rest, summary statistics were calculated by both lake and depth zone. In addition, five species were considered as "special cases" where some areas were similar while others were not. For example, some species had similar weights in multiple lakes, thus those lakes were grouped together while others were kept separate. Dreissena rostriformis bugensis weights were calculated by lake and depth zone except for Lake Erie, where the western, central, and eastern basins were separated because previous research demonstrated that D. rostriformis bugensis size structure is drastically different in each of Lake Erie’s basins (Karatayev et al., 2021). Other special cases were: Heterotrissocladius marcidus group (Huron, Michigan, and Ontario were similar and grouped together, while mean weight in Lake Superior was different), Pisidium spp. (grouped as Ontario/Michigan, Erie, and Huron/Superior), Unidentified Chironomidae (Lake Erie was separated and all other lakes were grouped together), and Spirosperma ferox (Lake Erie was separated and all other lakes were grouped together). To calculate mean individual weights for commonly reported larger taxonomic groups (e.g., Oligochaeta, Chironomidae), we combined species or taxonomic units that belonged to this group (see "SpeciesList.csv" for information on groupings). Summary statistics were calculated on the mean individual weight for all individuals within a group in a given sample, i.e., total biomass for a given group was divided by total density for that group, repeated for each sample. Results are given for each major group as a mean/minimum/maximum for each lake, and for each depth zone within each lake, as groups are often made up of different species with different body sizes in each lake and depth zone. Because densities of oligochaetes were counted based on the number of oligochaetes with heads in a sample (excluding fragments), but the fragments were weighed to calculate biomass, the mean individual weight for oligochaetes within a sample was calculated by dividing the weight of all oligochaetes (including fragments) in a sample by the number of oligochaetes (not including fragments). Calculations of mean individual weight by major group were performed both by lake and lake plus depth zone (data file "IndividualWeights_MajorGroups.csv"). Summary statistics were reported for 14 major taxa and were broken down by depth zone when sample size was sufficient (data file "IndividualWeights_MajorGroups.csv").

    References: Burlakova, L. E., Barbiero, R. P., Karatayev, A. Y., Daniel, S. E., Hinchey, E. K., & Warren, G. J. (2018). The benthic community of the Laurentian Great Lakes: Analysis of spatial gradients and temporal trends from 1998 to 2014. Journal of Great Lakes Research, 44(4), 600–617. https://doi.org/10.1016/j.jglr.2018.04.008; Cook, D. G., & Johnson, M. G. (1974). Benthic Macroinvertebrates of the St. Lawrence Great Lakes.
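
    A hedged sketch of the per-sample weight calculation and the Kruskal-Wallis grouping test described above; the table layout and column names are invented for illustration and are not the published CSV schemas.

    ```python
    import pandas as pd
    from scipy.stats import kruskal

    # Invented per-sample records for one taxonomic unit.
    df = pd.DataFrame({
        "sample_id": [1, 2, 3, 4, 5, 6],
        "lake":      ["Erie", "Erie", "Huron", "Huron", "Ontario", "Ontario"],
        "biomass_g": [0.012, 0.008, 0.020, 0.015, 0.011, 0.009],
        "density_n": [30, 22, 41, 35, 28, 25],
    })

    # Mean individual wet weight per sample = total biomass / total density for the group.
    per_sample = (df.groupby(["sample_id", "lake"], as_index=False)
                    .agg(biomass=("biomass_g", "sum"), density=("density_n", "sum")))
    per_sample["mean_ind_weight_g"] = per_sample["biomass"] / per_sample["density"]

    # Kruskal-Wallis test: do individual weights differ between lakes?
    groups = [g["mean_ind_weight_g"].to_numpy() for _, g in per_sample.groupby("lake")]
    H, p = kruskal(*groups)
    print(f"H = {H:.2f}, p = {p:.3f}")
    ```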

  7. Prediction error (PE) as the difference between achieved and formula...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann (2023). Prediction error (PE) as the difference between achieved and formula predicted spherical equivalent for 8 different statistical metrics of formula constant optimisation and various formulae under test. [Dataset]. http://doi.org/10.1371/journal.pone.0267352.t003
    Explore at:
    xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Prediction error (PE) as the difference between achieved and formula predicted spherical equivalent for 8 different statistical metrics of formula constant optimisation and various formulae under test.

  8. EPIC Ocean Surface PAR 1 Product V02 - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Jun 1, 2025
    + more versions
    Cite
    nasa.gov (2025). EPIC Ocean Surface PAR 1 Product V02 - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/epic-ocean-surface-par-1-product-v02
    Explore at:
    Dataset updated
    Jun 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    EPIC Ocean Surface PAR: The EPIC observations of the Earth’s surface lit by the Sun, made 13 times during the day in spectral bands centered on 443, 551, and 680 nm, are used to estimate daily mean photosynthetically available radiation (PAR) at the ice-free ocean surface. PAR is defined as the quantum energy flux from the Sun in the 400-700 nm range. Daily mean PAR is the 24-hour averaged planar flux in that spectral range reaching the surface. It is expressed in E.m-2.d-1 (Einstein per meter squared per day). The factor required to convert E.m-2.d-1 units to mW.cm-2.μm-1 units is equal to 0.838, to an inaccuracy of a few percent regardless of meteorological conditions. The EPIC daily mean PAR product is generated on a Plate Carrée (equal-angle) grid with 18.4 km resolution at the equator and on an 18.4 km equal-area grid, i.e., the product is compatible with Ocean Biology Processing Group ocean color products.

    The EPIC PAR algorithm uses a budget approach, in which the solar irradiance reaching the surface is obtained by subtracting from the irradiance arriving at the top of the atmosphere (known) the irradiance reflected to space (estimated from the EPIC Level 1b radiance data), taking into account atmospheric transmission (modeled). Clear and cloudy regions within a pixel do not need to be distinguished, which dismisses the need for often-arbitrary assumptions about cloudiness distribution and is therefore adapted to the relatively large EPIC pixels. A daily mean PAR is estimated on the source grid for each EPIC instantaneous daytime observation, assuming no cloudiness change during the day, and the individual estimates are remapped and weight-averaged using the cosine of the Sun zenith angle. In the computations, wind speed, surface pressure, and water vapor amount are extracted from NCEP Reanalysis 2 data, aerosol optical thickness and Angstrom coefficient from MERRA-2 data, and ozone amount from EPIC Level 2 data. Areas contaminated by sun glint are excluded using a threshold on sun glint reflectance calculated using wind data. Ice masking is based on NSIDC near real time ice fraction data. Details about the algorithm are given in Frouin et al. (2018). Figure A1 gives an example of the EPIC daily mean PAR product. Date is March 20, 2018 (equinox); land is in black and sea ice in white. Values range from a few E.m-2.d-1 at high latitudes to about 58 E.m-2.d-1 at equatorial and tropical latitudes, with atmospheric perturbances modulating the surface PAR field especially at middle latitudes. The EPIC ocean surface PAR products are available at the Atmospheric Science Data Center (ASDC) at NASA Langley Research Center: https://asdc.larc.nasa.gov.

    Reference: Robert Frouin, Jing Tan, Didier Ramon, Bryan Franz, Hiroshi Murakami, 2018: Estimating photosynthetically available radiation at the ocean surface from EPIC/DSCOVR data, Proc. SPIE 10778, Remote Sensing of the Open and Coastal Ocean and Inland Waters, 1077806 (24 October 2018); doi: 10.1117/12.2501675.

    Changes from version 1:
    1) Algorithm (consistent with PACE): Updated the calculation of atmospheric reflectance, gaseous transmittance, and atmospheric transmittance using a LUT method so that calculations are accurate at high Sun and view zenith angles; updated the calculation of surface albedo (based on Jin et al., 2011); updated the calculation of cloud/surface layer albedo.
    2) Ancillary data: Changed the sources of the ancillary data, including wind speed, surface pressure, and water vapor, from NCEP to MERRA-2; added cloud fraction from MERRA-2, which is needed for computing the direct/diffuse ratio and hence surface albedo.
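
    A tiny illustration of the unit conversion quoted above; the 0.838 factor comes from the product description, and the PAR value is made up.

    ```python
    # Convert daily mean PAR from E m-2 d-1 to mW cm-2 um-1 using the approximate
    # factor stated in the product description (accurate to within a few percent).
    EINSTEIN_TO_MW_FACTOR = 0.838

    par_einstein = 42.0                             # hypothetical daily mean PAR, E m-2 d-1
    par_mw = par_einstein * EINSTEIN_TO_MW_FACTOR   # about 35.2 mW cm-2 um-1
    print(par_mw)
    ```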

  9. Global land surface dataset of Heating and Cooling Degree Days from a...

    • ora.ox.ac.uk
    zip
    Updated Jan 1, 2024
    Cite
    Lizana, J; Miranda, N D; Sparrow, S N; Wallom, D C H; Khosla, R; McCulloch, M (2024). Global land surface dataset of Heating and Cooling Degree Days from a bias-corrected HadAM4-based temperature ensemble under 1.0ºC, 1.5ºC, and 2.0ºC climate scenarios. [Dataset]. http://doi.org/10.5287/ora-w4qpqy522
    Explore at:
    zip (4461458)
    Dataset updated
    Jan 1, 2024
    Dataset provided by
    University of Oxford
    Authors
    Lizana, J; Miranda, N D; Sparrow, S N; Wallom, D C H; Khosla, R; McCulloch, M
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains global gridded maps of Heating Degree Days (HDD) and Cooling Degree Days (CDD) for three climate scenarios: a historical scenario corresponding to a global mean temperature rise of 1.0°C above pre-industrial levels (based on observations from 2006 to 2016), and two future climate projections for global mean temperature increases of 1.5°C and 2.0°C, respectively, regardless of when these thresholds are reached. HDD and CDD measure by how much the daily mean temperature falls below (HDD) or exceeds (CDD) a reference temperature, summed over a given period. They are widely used indicators for examining temperature-related climate and for quantifying heating and cooling demand.

    Five different maps of HDD and CDD are available for each scenario as NetCDF V4 files (*.nc). These maps relate to different annual statistical indices calculated using 70 climate simulations over a 10-year period: mean, median, 10th percentile, 90th percentile, and standard deviation. The novelty of this dataset lies in the combination of two factors: the representation of global mean temperature rise scenarios for 1.5°C and 2.0°C globally, regardless of when these occur; and the bias-corrected global climate dataset used to calculate HDD and CDD, which involves a large ensemble size at a high global spatio-temporal resolution.

    Methods:

    The global gridded statistical maps of HDD and CDD were calculated considering 18°C as the baseline temperature. First, the annual HDD and CDD were calculated for each simulated year of each scenario at all geographic locations (a total of 700 simulated years per scenario). Then, the statistical indices across this variability were obtained. Global gridded maps have a spatial resolution of 0.833° x 0.556° (longitude x latitude) over the land surface.
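
    A minimal sketch of the degree-day calculation described above, using the 18°C baseline; the daily mean temperature series is synthetic.

    ```python
    import numpy as np

    BASE_TEMP_C = 18.0

    # Synthetic daily mean temperatures for one grid cell over one year (deg C).
    rng = np.random.default_rng(1)
    daily_mean_temp = 12.0 + 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 365)) + rng.normal(0.0, 2.0, 365)

    # Annual degree days: sum of daily shortfalls (HDD) or exceedances (CDD) of the baseline.
    annual_hdd = np.sum(np.clip(BASE_TEMP_C - daily_mean_temp, 0.0, None))
    annual_cdd = np.sum(np.clip(daily_mean_temp - BASE_TEMP_C, 0.0, None))
    print(f"HDD = {annual_hdd:.0f}, CDD = {annual_cdd:.0f}")
    ```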

    Climate data used:

    These global gridded maps of CDD and HDD were calculated using bias-corrected global climate simulations for mean temperature generated using the HadAM4 Atmosphere-only General Circulation Model (AGCM) from the UK Met Office Hadley Centre. Each scenario involved an ensemble of 70 individual members with 6-hourly mean temperatures at a horizontal resolution of 0.833 longitude and 0.556 latitude for a 10-year period (700 runs per scenario), aiming to ensure internal climate variability. These simulation experiments were run within the climateprediction.net (CPDN) climate simulation environment, using the Berkeley Open Infrastructure for Network Computing (BOINC) framework to distribute a large number of individual computational tasks. This system utilises the computational power of publicly volunteered computers that are globally distributed. The bias-corrected global climate dataset used to calculate these CDD and HDD maps is available at:

    Lizana, J.; Miranda, N.D.; Sparrow, S.; Zachau-Walker, M.; Watson, P.; Wallom, D.C.H.; McCulloch, M. (2023): Large ensemble of global mean temperatures: 6-hourly HadAM4 model run data using the Climateprediction.net platform. NERC EDS Centre for Environmental Data Analysis, 28 June 2023. doi:10.5285/9c41e3aa67024bbdad796290a861e968

  10. Seattle Neighborhood Profiles King County and Seattle Medians

    • hub.arcgis.com
    • data.seattle.gov
    • +1 more
    Updated Mar 9, 2024
    + more versions
    Cite
    City of Seattle ArcGIS Online (2024). Seattle Neighborhood Profiles King County and Seattle Medians [Dataset]. https://hub.arcgis.com/datasets/09269446ae2044da9ec7e22011473b6b
    Explore at:
    Dataset updated
    Mar 9, 2024
    Dataset authored and provided by
    City of Seattle ArcGIS Online
    Area covered
    King County, Seattle
    Description

    Table from the American Community Survey (ACS) 5-year series for King County and City of Seattle median values for a variety of topics including age, gross rent, monthly owner costs, family and nonfamily incomes, and earnings. Includes the margin of error for the values. Table created for and used in the Neighborhood Profiles application.

    Vintages: 2010, 2015, 2020, 2023
    ACS Table(s): B01002, B25064, B25088, B19013, B19113, B19202, B20017
    Data downloaded from: Census Bureau's Explore Census Data
    The United States Census Bureau's American Community Survey (ACS): About the Survey; Geography & ACS; Technical Documentation; News & Updates

    This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. Please cite the Census and ACS when using this data.

    Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

    Data Processing Notes: Boundaries come from the US Census TIGER geodatabases, specifically, the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2020 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records - all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (Census Tracts beginning with 99). Percentages and derived counts, and associated margins of error, are calculated values (that can be identified by the "_calc_" stub in the field name), and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page. Negative values (e.g., -4444...) have been set to null, with the exception of -5555... which has been set to zero. These negative values exist in the raw API data to indicate the following situations:

    • The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
    • Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
    • The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
    • The estimate is controlled. A statistical test for sampling variability is not appropriate.
    • The data for this geographic area cannot be displayed because the number of sample cases is too small.

  11. OCCUPATION BY MEDIAN EARNINGS IN THE PAST 12 MONTHS (B24021)

    • hub.arcgis.com
    • data.seattle.gov
    • +1 more
    Updated Aug 30, 2023
    + more versions
    Cite
    City of Seattle ArcGIS Online (2023). OCCUPATION BY MEDIAN EARNINGS IN THE PAST 12 MONTHS (B24021) [Dataset]. https://hub.arcgis.com/maps/SeattleCityGIS::occupation-by-median-earnings-in-the-past-12-months-b24021
    Explore at:
    Dataset updated
    Aug 30, 2023
    Dataset authored and provided by
    City of Seattle ArcGIS Online
    Description

    Table from the American Community Survey (ACS) B24021, occupation by median earnings. These are multiple, nonoverlapping vintages of the 5-year ACS estimates of population and housing attributes starting in 2010, shown by the corresponding census tract vintage. Also includes the most recent release annually. King County, Washington census tracts with nonoverlapping vintages of the 5-year American Community Survey (ACS) estimates starting in 2010. Vintage identified in the "ACS Vintage" field. The census tract boundaries match the vintage of the ACS data (currently 2010 and 2020), so please note the geographic changes between the decades. Tracts have been coded as being within the City of Seattle as well as assigned to neighborhood groups called "Community Reporting Areas". These areas were created after the 2000 census to provide geographically consistent neighborhoods through time for reporting U.S. Census Bureau data. This is not an attempt to identify neighborhood boundaries as defined by neighborhoods themselves.

    Vintages: 2010, 2015, 2020, 2021, 2022, 2023
    ACS Table(s): B24021
    Data downloaded from: Census Bureau's Explore Census Data
    The United States Census Bureau's American Community Survey (ACS): About the Survey; Geography & ACS; Technical Documentation; News & Updates

    This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. Please cite the Census and ACS when using this data.

    Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

    Data Processing Notes: Boundaries come from the US Census TIGER geodatabases, specifically, the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2020 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records - all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (Census Tracts beginning with 99). Percentages and derived counts, and associated margins of error, are calculated values (that can be identified by the "_calc_" stub in the field name), and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page. Negative values (e.g., -4444...) have been set to null, with the exception of -5555... which has been set to zero. These negative values exist in the raw API data to indicate the following situations:

    • The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
    • Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
    • The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
    • The estimate is controlled. A statistical test for sampling variability is not appropriate.
    • The data for this geographic area cannot be displayed because the number of sample cases is too small.

  12. Vital Signs: List Rents – by city

    • open-data-demo.mtc.ca.gov
    • data.bayareametro.gov
    csv, xlsx, xml
    Updated Jan 19, 2017
    + more versions
    Cite
    real Answers (2017). Vital Signs: List Rents – by city [Dataset]. https://open-data-demo.mtc.ca.gov/dataset/Vital-Signs-List-Rents-by-city/vpmm-yh3p/about
    Explore at:
    xlsx, xml, csv
    Dataset updated
    Jan 19, 2017
    Dataset authored and provided by
    real Answers
    Description

    VITAL SIGNS INDICATOR List Rents (EC9)

    FULL MEASURE NAME List Rents

    LAST UPDATED October 2016

    DESCRIPTION List rent refers to the advertised rents for available rental housing and serves as a measure of housing costs for new households moving into a neighborhood, city, county or region.

    DATA SOURCE real Answers (1994 – 2015) no link

    Zillow Metro Median Listing Price All Homes (2010-2016) http://www.zillow.com/research/data/

    CONTACT INFORMATION vitalsigns.info@mtc.ca.gov

    METHODOLOGY NOTES (across all datasets for this indicator) List rents data reflects median rent prices advertised for available apartments rather than median rent payments; more information is available in the indicator definition above. Regional and local geographies rely on data collected by real Answers, a research organization and database publisher specializing in the multifamily housing market. real Answers focuses on collecting longitudinal data for individual rental properties through quarterly surveys. For the Bay Area, their database is comprised of properties with 40 to 3,000+ housing units. Median list prices most likely have an upward bias due to the exclusion of smaller properties. The bias may be most extreme in geographies where large rental properties represent a small portion of the overall rental market. A map of the individual properties surveyed is included in the Local Focus section.

    Individual properties surveyed provided lower- and upper-bound ranges for the various types of housing available (studio, 1 bedroom, 2 bedroom, etc.). Median lower- and upper-bound prices are determined across all housing types for the regional and county geographies. The median list price represented in Vital Signs is the average of the median lower- and upper-bound prices for the region and counties. Median upper-bound prices are determined across all housing types for the city geographies. The median list price represented in Vital Signs is the median upper-bound price for cities. For simplicity, only the mean list rent is displayed for the individual properties. The metro area geography relies upon Zillow data, which is the median price for rentals listed through www.zillow.com during the month. Like the real Answers data, Zillow's median list prices most likely have an upward bias since small properties are underrepresented in Zillow's listings. The metro area data for the Bay Area cannot be compared to the regional Bay Area data. Due to the aforementioned data limitations, this data is suitable for analyzing the change in list rents over time but not necessarily comparisons of absolute list rents. Metro area boundaries reflect today’s metro area definitions by county for consistency, rather than historical metro area boundaries.

    Due to the limited number of rental properties surveyed, city-level data is unavailable for Atherton, Belvedere, Brisbane, Calistoga, Clayton, Cloverdale, Cotati, Fairfax, Half Moon Bay, Healdsburg, Hillsborough, Los Altos Hills, Monte Sereno, Moraga, Oakley, Orinda, Portola Valley, Rio Vista, Ross, San Anselmo, San Carlos, Saratoga, Sebastopol, Windsor, Woodside, and Yountville.

    Inflation-adjusted data are presented to illustrate how rents have grown relative to overall price increases; that said, the use of the Consumer Price Index does create some challenges given the fact that housing represents a major chunk of the consumer goods bundle used to calculate CPI. This reflects a methodological tradeoff between precision and accuracy and is a common concern when working with any commodity that is a major component of CPI itself. Percent change in inflation-adjusted median is calculated with respect to the median price from the fourth quarter or December of the base year.
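
    A hedged sketch of the two calculations described in these notes; the rent figures are invented.

    ```python
    import numpy as np

    # Invented lower/upper-bound list rents reported by surveyed properties in one county.
    lower_bounds = np.array([1850.0, 2100.0, 1600.0, 2400.0])
    upper_bounds = np.array([2300.0, 2650.0, 2050.0, 3100.0])

    # Regional/county list rent: the average of the median lower- and upper-bound prices.
    list_rent = (np.median(lower_bounds) + np.median(upper_bounds)) / 2.0

    # Inflation-adjusted percent change relative to the base-year (Q4/December) median.
    base_year_median_real = 1900.0  # invented, already in constant dollars
    pct_change = 100.0 * (list_rent - base_year_median_real) / base_year_median_real
    print(round(list_rent, 2), round(pct_change, 1))
    ```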

  13. Annual Cooling Degree Days - Projections (12km)

    • climatedataportal.metoffice.gov.uk
    Updated May 17, 2023
    + more versions
    Cite
    Met Office (2023). Annual Cooling Degree Days - Projections (12km) [Dataset]. https://climatedataportal.metoffice.gov.uk/datasets/annual-cooling-degree-days-projections-12km
    Explore at:
    Dataset updated
    May 17, 2023
    Dataset authored and provided by
    Met Office (http://www.metoffice.gov.uk/)
    Area covered
    Description

    [Updated 28/01/25 to fix an issue in the ‘Lower’ values, which were not fully representing the range of uncertainty. ‘Median’ and ‘Higher’ values remain unchanged. The size of the change varies by grid cell and fixed period/global warming level, but the average difference between the 'lower' values before and after this update is 1.2.]

    What does the data show?
    A Cooling Degree Day (CDD) is a day in which the average temperature is above 22°C. It is the number of degrees above this threshold that counts as a Cooling Degree Day. For example, if the average temperature for a specific day is 22.5°C, this would contribute 0.5 Cooling Degree Days to the annual sum; alternatively, an average temperature of 27°C would contribute 5 Cooling Degree Days. Given the data shows the annual sum of Cooling Degree Days, this value can be above 365 in some parts of the UK. Annual Cooling Degree Days is calculated for two baseline (historical) periods, 1981-2000 (corresponding to 0.51°C warming) and 2001-2020 (corresponding to 0.87°C warming), and for global warming levels of 1.5°C, 2.0°C, 2.5°C, 3.0°C and 4.0°C above the pre-industrial (1850-1900) period. This enables users to compare the future number of CDD to previous values.

    What are the possible societal impacts?
    Cooling Degree Days indicate the energy demand for cooling due to hot days. A higher number of CDD means an increase in power consumption for cooling and air conditioning, therefore this index is useful for predicting future changes in energy demand for cooling. In practice, this varies greatly throughout the UK, depending on personal thermal comfort levels and building designs, so these results should be considered as rough estimates of overall demand changes on a large scale.

    What is a global warming level?
    Annual Cooling Degree Days are calculated from the UKCP18 regional climate projections using the high emissions scenario (RCP 8.5), where greenhouse gas emissions continue to grow. Instead of considering future climate change during specific time periods (e.g. decades) for this scenario, the dataset is calculated at various levels of global warming relative to the pre-industrial (1850-1900) period. The world has already warmed by around 1.1°C (between 1850–1900 and 2011–2020), whilst this dataset allows for the exploration of greater levels of warming. The global warming levels available in this dataset are 1.5°C, 2°C, 2.5°C, 3°C and 4°C. The data at each warming level was calculated using a 21 year period. These 21 year periods are calculated by taking 10 years either side of the first year at which the global warming level is reached. This time will be different for different model ensemble members. To calculate the value for the Annual Cooling Degree Days, an average is taken across the 21 year period. Therefore, the Annual Cooling Degree Days show the number of cooling degree days that could occur each year, for each given level of warming. We cannot provide a precise likelihood for particular emission scenarios being followed in the real world future. However, we do note that RCP8.5 corresponds to emissions considerably above those expected with current international policy agreements. The results are also expressed for several global warming levels because we do not yet know which level will be reached in the real climate as it will depend on future greenhouse emission choices and the sensitivity of the climate system, which is uncertain.
    Estimates based on the assumption of current international agreements on greenhouse gas emissions suggest a median warming level in the region of 2.4-2.8°C, but it could either be higher or lower than this level.

    What are the naming conventions and how do I explore the data?
    This data contains a field for each global warming level and two baselines. They are named ‘CDD’ (Cooling Degree Days), the warming level or baseline, and 'upper', 'median' or 'lower' as per the description below. E.g. ‘CDD 2.5 median’ is the median value for the 2.5°C projection. Decimal points are included in field aliases but not field names, e.g. ‘CDD 2.5 median’ is ‘CDD_25_median’. To understand how to explore the data, see this page: https://storymaps.arcgis.com/stories/457e7a2bc73e40b089fac0e47c63a578. Please note, if viewing in ArcGIS Map Viewer, the map will default to ‘CDD 2.0°C median’ values.

    What do the ‘median’, ‘upper’, and ‘lower’ values mean?
    Climate models are numerical representations of the climate system. To capture uncertainty in projections for the future, an ensemble, or group, of climate models are run. Each ensemble member has slightly different starting conditions or model set-ups. Considering all of the model outcomes gives users a range of plausible conditions which could occur in the future. For this dataset, the model projections consist of 12 separate ensemble members. To select which ensemble members to use, Annual Cooling Degree Days were calculated for each ensemble member and they were then ranked in order from lowest to highest for each location. The ‘lower’ fields are the second lowest ranked ensemble member. The ‘upper’ fields are the second highest ranked ensemble member. The ‘median’ field is the central value of the ensemble. This gives a median value, and a spread of the ensemble members indicating the range of possible outcomes in the projections. This spread of outputs can be used to infer the uncertainty in the projections. The larger the difference between the lower and upper fields, the greater the uncertainty. ‘Lower’, ‘median’ and ‘upper’ are also given for the baseline periods as these values also come from the model that was used to produce the projections. This allows a fair comparison between the model projections and recent past.

    Useful links
    This dataset was calculated following the methodology in the ‘Future Changes to high impact weather in the UK’ report and uses the same temperature thresholds as the 'State of the UK Climate' report. Further information on the UK Climate Projections (UKCP). Further information on understanding climate data within the Met Office Climate Data Portal.
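
    A small illustration of the Cooling Degree Day definition used above (22°C threshold) and of the field-naming convention; the daily temperatures are invented.

    ```python
    # Cooling Degree Day contribution: degrees by which a day's average exceeds 22 degrees C.
    THRESHOLD_C = 22.0

    daily_mean_temps_c = [21.0, 22.5, 27.0, 19.4]          # hypothetical daily averages
    annual_cdd = sum(max(0.0, t - THRESHOLD_C) for t in daily_mean_temps_c)
    print(annual_cdd)                                       # 0.5 + 5.0 = 5.5

    def field_name(level: str, stat: str) -> str:
        """Build a field name such as 'CDD_25_median' from a warming level and statistic."""
        return f"CDD_{level.replace('.', '')}_{stat}"

    print(field_name("2.5", "median"))                      # CDD_25_median
    ```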

  14. Data from: Acoustic multi-frequency indicator of four major groups on a...

    • search.dataone.org
    • doi.pangaea.de
    • +1 more
    Updated Jan 5, 2018
    Cite
    Trenkel, Verena M; Berger, Laurent (2018). Acoustic multi-frequency indicator of four major groups on a spatial grid in the Bay of Biscay [Dataset]. http://doi.org/10.1594/PANGAEA.833957
    Explore at:
    Dataset updated
    Jan 5, 2018
    Dataset provided by
    PANGAEA Data Publisher for Earth and Environmental Science
    Authors
    Trenkel, Verena M; Berger, Laurent
    Time period covered
    May 2, 2006 - May 24, 2010
    Area covered
    Bay of Biscay
    Description

    The first data set contains the mean and coefficient of variation (standard deviation divided by mean) of a multi-frequency indicator I derived from ER60 acoustic information collected at five frequencies (18, 38, 70, 120, and 200 kHz) in the Bay of Biscay in May of the years 2006, 2008, 2009 and 2010 (Pelgas surveys). The multi-frequency indicator was first calculated per voxel (20 m long × 5 m deep sampling unit) and then averaged on a spatial grid (approx. 20 nm × 20 nm) for five 5-m depth layers in the surface waters (10-15 m, 15-20 m, 20-25 m, 25-30 m below sea surface); there are missing values, in particular in the shallowest layer. The second data set provides, for each grid cell and depth layer, the proportion of voxels for which the multi-frequency indicator I was indicative of a certain group of organisms. For this, the following interpretation was used: I < 0.39 swim bladder fish or large gas bubbles; I = 0.39-0.58 small resonant bubbles present in gas-bearing organisms such as larval fish and phytoplankton; I = 0.7-0.8 fluidlike zooplankton such as copepods and euphausiids; and I > 0.8 mackerel. These proportions can be interpreted as a relative abundance index for each of the four organism groups.
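    As a rough illustration of how the interpretation thresholds above could be applied, the sketch below classifies voxel-level indicator values into the four organism groups and reports their proportions; the thresholds come from the description, while the function names and example values are hypothetical.

```python
# Hedged sketch: classify multi-frequency indicator values I into the four groups
# named in the dataset description and compute their proportions per grid cell.
# Thresholds come from the description; gaps between bands are left unclassified.

def classify_indicator(i):
    if i < 0.39:
        return "swim bladder fish / large gas bubbles"
    elif i <= 0.58:
        return "small resonant bubbles (larval fish, phytoplankton)"
    elif 0.7 <= i <= 0.8:
        return "fluidlike zooplankton (copepods, euphausiids)"
    elif i > 0.8:
        return "mackerel"
    return "unclassified"

def group_proportions(voxel_indicators):
    """Proportion of voxels per group, usable as a relative abundance index."""
    counts = {}
    for i in voxel_indicators:
        group = classify_indicator(i)
        counts[group] = counts.get(group, 0) + 1
    n = len(voxel_indicators)
    return {g: c / n for g, c in counts.items()}

print(group_proportions([0.2, 0.45, 0.75, 0.9, 0.95]))
```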

  15. Data from: Model-based changes in global annual mean surface temperature...

    • doi.pangaea.de
    • search.dataone.org
    zip
    Updated Dec 1, 2015
    + more versions
    Cite
    Peter Köhler; Bas de Boer; Anna S von der Heydt; Lennert Bastiaan Stap; Roderik S W van de Wal (2015). Model-based changes in global annual mean surface temperature change (Delta T_g) and radiative forcing due to land ice albedo changes (Delta R_[LI]) over the last 5 Myr, supplementary material [Dataset]. http://doi.org/10.1594/PANGAEA.855449
    Explore at:
    zipAvailable download formats
    Dataset updated
    Dec 1, 2015
    Dataset provided by
    PANGAEA
    Authors
    Peter Köhler; Bas de Boer; Anna S von der Heydt; Lennert Bastiaan Stap; Roderik S W van de Wal
    License

    Attribution 3.0 (CC BY 3.0)https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    It is still an open question how equilibrium warming in response to increasing radiative forcing – the specific equilibrium climate sensitivity S – depends on background climate. We here present palaeodata-based evidence on the state dependency of S, by using CO2 proxy data together with a 3-D ice-sheet-model-based reconstruction of land ice albedo over the last 5 million years (Myr). We find that the land ice albedo forcing depends non-linearly on the background climate, while any non-linearity of CO2 radiative forcing depends on the CO2 data set used. This non-linearity has not, so far, been accounted for in similar approaches due to previously more simplistic approximations, in which land ice albedo radiative forcing was a linear function of sea level change. The latitudinal dependency of ice-sheet area changes is important for the non-linearity between land ice albedo and sea level. In our set-up, in which the radiative forcing of CO2 and of the land ice albedo (LI) is combined, we find a state dependence in the calculated specific equilibrium climate sensitivity, S[CO2,LI], for most of the Pleistocene (last 2.1 Myr). During Pleistocene intermediate glaciated climates and interglacial periods, S[CO2,LI] is on average ~ 45 % larger than during Pleistocene full glacial conditions. In the Pliocene part of our analysis (2.6–5 Myr BP) the CO2 data uncertainties prevent a well-supported calculation for S[CO2,LI], but our analysis suggests that during times without a large land ice area in the Northern Hemisphere (e.g. before 2.82 Myr BP), the specific equilibrium climate sensitivity, S[CO2,LI], was smaller than during interglacials of the Pleistocene. We thus find support for a previously proposed state change in the climate system with the widespread appearance of northern hemispheric ice sheets. This study points for the first time to a so far overlooked non-linearity in the land ice albedo radiative forcing, which is important for similar palaeodata-based approaches to calculate climate sensitivity. However, the implications of this study for a suggested warming under CO2 doubling are not yet entirely clear since the details of necessary corrections for other slow feedbacks are not fully known and the uncertainties that exist in the ice-sheet simulations and global temperature reconstructions are large.
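    As a hedged reading of the quantities named in the title and description (this exact expression is not quoted from the paper), the combined specific equilibrium climate sensitivity is presumably of the form

```latex
% Assumed form, inferred from the description: \Delta T_g and \Delta R_{[LI]} are
% provided by the dataset, while \Delta R_{CO_2} comes from the CO2 proxy records.
S_{[CO_2,\,LI]} = \frac{\Delta T_g}{\Delta R_{CO_2} + \Delta R_{[LI]}}
```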

  16. Optimised formula constants for the SRKT, the Hoffer Q, the Holladay 1,...

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann (2023). Optimised formula constants for the SRKT, the Hoffer Q, the Holladay 1, Haigis (with optimised a0 and preset values a1 = 0.4 / a2 = 0.1, Haigis1; and with optimised a0 / a1 / a2 constant triplet, Haigis3), and Castrop formula. [Dataset]. http://doi.org/10.1371/journal.pone.0267352.t002
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Optimised formula constants for the SRKT, the Hoffer Q, the Holladay 1, Haigis (with optimised a0 and preset values a1 = 0.4 / a2 = 0.1, Haigis1; and with optimised a0 / a1 / a2 constant triplet, Haigis3), and Castrop formula.

  17. Crypto Currencies

    • kaggle.com
    zip
    Updated Aug 15, 2023
    Cite
    Joakim Arvidsson (2023). Crypto Currencies [Dataset]. https://www.kaggle.com/datasets/joebeachcapital/crypto-currencies
    Explore at:
    zip(4101771 bytes)Available download formats
    Dataset updated
    Aug 15, 2023
    Authors
    Joakim Arvidsson
    License

    http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    A dataset containing the closing prices for the last day. Data source: https://coinmetrics.io.

    Summary

    The data is pulled from here: https://coinmetrics.io/data-downloads/

    Data sources and methodology
    For UTXO coins, daily on-chain transaction volume is calculated as the sum of all transaction outputs belonging to the blocks mined on the given day. Known “change” outputs are not included. Estimation difficulties remain and the measure is imprecise. We discuss this here. The methodology behind adjusted transaction volume figures is described in this post. XRP transaction volume includes only transfers of XRP tokens.

    Transaction count figure doesn’t include coinbase and coinstake transactions.

    Active addresses is the number of unique sending and receiving addresses participating in transactions on the given day. For Monero, we report an upper bound for this metric (calculated as sum of input and output count), as the precise value is unknowable due to stealth addresses technology.

    Payment count for UTXO coins is defined as sum of outputs’ count minus one for each transaction. We assume that transaction with N outputs pays to N – 1 addresses and the last N-th output is change. Transactions with only one output do not contribute to payment count, as they are likely to be a self-churn. Payment count for smart contract assets such as ETH or LSK is calculated as the amount of transfer transactions (i.e. contract creation, invocation, destruction transactions are not included). Payment count for Ripple is the amount of XRP token transfers.
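    As an illustration of the payment-count rule described above (a transaction with N outputs is counted as N - 1 payments, single-output transactions as self-churn), here is a minimal sketch; the transaction representation is hypothetical and is not CoinMetrics' actual tooling.

```python
# Sketch of the UTXO payment-count rule: a transaction with N outputs is assumed
# to pay N - 1 addresses (the last output being change); single-output transactions
# are treated as self-churn and contribute nothing. Coinbase/coinstake transactions
# are skipped here as an assumption, mirroring the transaction-count note above.

def payment_count(transactions):
    total = 0
    for tx in transactions:
        if tx.get("is_coinbase") or tx.get("is_coinstake"):
            continue
        n_outputs = len(tx["outputs"])
        if n_outputs > 1:
            total += n_outputs - 1
    return total

# Hypothetical day: 3 outputs -> 2 payments, 1 output -> 0, coinbase ignored.
txs = [
    {"outputs": [1.0, 2.0, 0.5]},
    {"outputs": [0.3]},
    {"outputs": [12.5], "is_coinbase": True},
]
print(payment_count(txs))  # -> 2
```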

    NEO and GAS transaction count figures reflect the amount of transactions that have at least one output of given asset type. If transaction sends both NEO and GAS, it will be included in transaction count for both assets. Fees figure is denominated in GAS and calculated by summing the fees of all transactions that have at least one output of a given asset type.

    Ripple data includes only transactions of Payment type that transfer XRP tokens.

    Stellar transaction volume data covers only operations of Payment and CreateAccount types that transfer XLM tokens. Transaction count is the number of transactions that include at least one operation of aforementioned types. Lumens inflation data is currently unavailable.

    XEM data includes only transactions of “Transfer” type.

    Zcash figures for on-chain volume and transaction count reflect data collected for transparent transactions only. In the last month, 9.1% (as of 14/06/18) of ZEC transactions were shielded, and these are excluded from the analysis due to their private nature. Transaction volume figures are therefore higher in reality than the estimate presented here, and the NVT and exchange-to-transaction-value ratios are correspondingly lower. Data on shielded and transparent transactions can be found here and here.

    Monero transaction volume is impossible to calculate due to RingCT which hides transaction amounts.

    EOS and TRX transaction volume figures include only transactions of transfer type. Median transaction value for EOS and TRX is actually median transfer value.

    WAVES transaction volume figure includes only transactions of transfer and mass transfer types. Median transaction value for WAVES is actually median value of WAVES token transfer.

    Price data
    All coins: coinmarketcap.com

    On-chain data
    BTC, BCH, LTC, DCR, DASH, ZEC, DOGE, PIVX, XVG, VTC, DGB, BTG, USDT, MAID: data collected from blockchains and aggregated by CM Python tools
    ETH and ERC20 tokens, ETC, XMR, XEM, ADA, LSK, NEO, GAS: data collected from blockchains by CM Haskell tools and aggregated by companion analytics scripts
    XRP: data collected from data.ripple.com by CM Haskell tools and aggregated by companion analytics scripts
    XLM: data collected from history.stellar.org by CM Haskell tools and aggregated by companion analytics scripts

  18. Habitat suitability (median) and standard deviation for each model and each...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Sep 27, 2024
    Cite
    Malte Hinsch; Jens Groß; Benjamin Burkhard (2024). Habitat suitability (median) and standard deviation for each model and each dataset for the entire study area of the Hannover Region. [Dataset]. http://doi.org/10.1371/journal.pone.0305731.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Sep 27, 2024
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Malte Hinsch; Jens Groß; Benjamin Burkhard
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Hanover Region
    Description

    Habitat suitability (median) and standard deviation for each model and each dataset for the entire study area of the Hannover Region.

  19. Optimised formula constants for the Hoffer Q (pACD), the Holladay 1 (SF),...

    • plos.figshare.com
    xls
    Updated Jun 18, 2023
    Cite
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann (2023). Optimised formula constants for the Hoffer Q (pACD), the Holladay 1 (SF), Haigis (a0/a1/a2), and Castrop formula (C / H / R). [Dataset]. http://doi.org/10.1371/journal.pone.0282213.t002
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 18, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Formula constant optimisation was performed to minimise the sum of squared prediction errors PE. Situation A) refers to the ‘classical’ formulae with standard nK/nC values; in situation B) the formula constants and the nK/nC value in the main part of the formula were varied for optimisation; in situation C) the formula constants and nK/nC were varied to minimise both the PE and the PE trend error over corneal radius; and in situation D) a standard optimisation was performed using the nK/nC value from situation B) derived from the other dataset, as a cross-validation.
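    As a minimal sketch of what minimising the sum of squared prediction errors over a formula constant could look like, assuming a placeholder predict_seq function standing in for any of the formulae above (illustrative only, not the authors' implementation):

```python
# Hedged sketch of formula constant optimisation: find the constant that minimises
# the sum of squared prediction errors PE = SEQ_measured - SEQ_predicted.
# `predict_seq` is a placeholder, not a real IOL formula; the data are made up.
from scipy.optimize import minimize_scalar

def predict_seq(constant, eye):
    # A real implementation would evaluate the SRK/T, Hoffer Q, Holladay 1,
    # Haigis or Castrop equation with the patient's biometry here.
    return eye["base_prediction"] - 0.1 * constant

def sum_squared_pe(constant, eyes):
    return sum((eye["measured_seq"] - predict_seq(constant, eye)) ** 2 for eye in eyes)

eyes = [
    {"base_prediction": -0.20, "measured_seq": -0.35},
    {"base_prediction": 0.10, "measured_seq": -0.05},
]
result = minimize_scalar(lambda c: sum_squared_pe(c, eyes), bounds=(-5, 5), method="bounded")
print(result.x)  # optimised constant for this toy data set
```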

  20. Formula prediction error PE (difference of the SEQ measured after cataract...

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann (2023). Formula prediction error PE (difference of the SEQ measured after cataract surgery minus the formula predicted SEQ) for the Hoffer Q (pACD), the Holladay 1 (SF), Haigis (a0/a1/a2), and Castrop formula (C / H / R). [Dataset]. http://doi.org/10.1371/journal.pone.0282213.t003
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Achim Langenbucher; Nóra Szentmáry; Alan Cayless; Jascha Wendelstein; Peter Hoffmann
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SD refers to the standard deviation, the 2.5% and 97.5% quantiles to the lower and upper boundaries of the 95% confidence interval, and IQR to the interquartile range, i.e. the difference between the 75% and 25% quantiles. Formula constant optimisation was performed to minimise the sum of squared prediction errors PE. Situation A) refers to the ‘classical’ formulae with standard nK/nC values; in situation B) the formula constants and the nK/nC value in the main part of the formula were varied for optimisation; in situation C) the formula constants and nK/nC were varied to minimise both the PE and the PE trend error over corneal radius; and in situation D) a standard optimisation was performed using the nK/nC value from situation B) derived from the other dataset, as a cross-validation.
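    For readers wishing to reproduce the summary statistics named here (SD, the 2.5% and 97.5% quantiles, and the IQR) from a vector of prediction errors, a minimal sketch with made-up numbers:

```python
# Sketch: the summary statistics named in the description, computed for a
# hypothetical vector of prediction errors PE (in dioptres).
import numpy as np

pe = np.array([-0.41, -0.12, 0.05, 0.18, 0.33, -0.27, 0.09])

mean = pe.mean()
sd = pe.std(ddof=1)                       # standard deviation
q025, q25, q75, q975 = np.quantile(pe, [0.025, 0.25, 0.75, 0.975])
iqr = q75 - q25                           # interquartile range
print(mean, sd, (q025, q975), iqr)        # q025..q975 bound the 95% interval
```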
