100+ datasets found
  1. Weather Data

    • kaggle.com
    zip
    Updated Jul 30, 2025
    Cite
    Data Science Lovers (2025). Weather Data [Dataset]. https://www.kaggle.com/datasets/rohitgrewal/weather-data/data
    Explore at:
    zip (102960 bytes)
    Available download formats
    Dataset updated
    Jul 30, 2025
    Authors
    Data Science Lovers
    License

    http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    📹Project Video available on YouTube - https://youtu.be/4hYOkHijtNw

    🖇️Connect with me on LinkedIn - https://www.linkedin.com/in/rohit-grewal

    The Weather Dataset is a time-series data set with per-hour information about the weather conditions at a particular location. It records Temperature, Dew Point Temperature, Relative Humidity, Wind Speed, Visibility, Pressure, and Conditions.

    This data is available as a CSV file. We have analysed this data using the Pandas library.

    Using this dataset, we answered multiple questions with Python in our Project.

    Q. 1) Find all the unique 'Wind Speed' values in the data.

    Q. 2) Find the number of times when the 'Weather is exactly Clear'.

    Q. 3) Find the number of times when the 'Wind Speed was exactly 4 km/h'.

    Q. 4) Find out all the Null Values in the data.

    Q. 5) Rename the column name 'Weather' of the dataframe to 'Weather Condition'.

    Q. 6) What is the mean 'Visibility' ?

    Q. 7) What is the Standard Deviation of 'Pressure' in this data?

    Q. 8) What is the Variance of 'Relative Humidity' in this data ?

    Q. 9) Find all instances when 'Snow' was recorded.

    Q. 10) Find all instances when 'Wind Speed is above 24' and 'Visibility is 25'.

    Q. 11) What is the Mean value of each column against each 'Weather Condition'?

    Q. 12) What is the Minimum & Maximum value of each column against each 'Weather Condition'?

    Q. 13) Show all the Records where Weather Condition is Fog.

    Q. 14) Find all instances when 'Weather is Clear' or 'Visibility is above 40'.

    Q. 15) Find all instances when : A. 'Weather is Clear' and 'Relative Humidity is greater than 50' or B. 'Visibility is above 40'
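Several of these questions reduce to one-liners in pandas. A sketch, using the column names from the feature list in this description and a tiny inline frame in place of the real CSV (the four rows below are illustrative, not real records):

```python
import pandas as pd

# Tiny illustrative frame using the dataset's column names
# (the real file has thousands of hourly rows).
df = pd.DataFrame({
    "Weather": ["Clear", "Fog", "Clear", "Snow"],
    "Wind Speed_km/h": [4, 8, 4, 24],
    "Visibility_km": [25.0, 8.0, 48.3, 25.0],
})

unique_speeds = df["Wind Speed_km/h"].unique()            # Q. 1
n_clear = (df["Weather"] == "Clear").sum()                # Q. 2
n_speed4 = (df["Wind Speed_km/h"] == 4).sum()             # Q. 3
nulls = df.isnull().sum()                                 # Q. 4
df = df.rename(columns={"Weather": "Weather Condition"})  # Q. 5
snow = df[df["Weather Condition"].str.contains("Snow")]   # Q. 9
windy_clear = df[(df["Wind Speed_km/h"] > 24)
                 & (df["Visibility_km"] == 25)]           # Q. 10
```

The remaining statistical questions follow the same pattern, e.g. `df["Visibility_km"].mean()` for Q. 6 or `df.groupby("Weather Condition").mean()` for Q. 11.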

    These are the main Features/Columns available in the dataset :

    • Date/Time - The timestamp when the weather observation was recorded. Format: M/D/YYYY H:MM.

    • Temp_C - The air temperature in degrees Celsius at the time of observation.

    • Dew Point Temp_C - The temperature at which air becomes saturated with moisture (dew point), also measured in degrees Celsius.

    • Rel Hum_% - The relative humidity, expressed as a percentage (%), indicating how much moisture is in the air compared to the maximum it could hold at that temperature.

    • Wind Speed_km/h - The speed of the wind at the time of observation, measured in kilometers per hour.

    • Visibility_km - The distance one can clearly see, measured in kilometers. Lower values often indicate fog or precipitation.

    • Press_kPa - Atmospheric pressure at the time of observation, measured in kilopascals (kPa).

    • Weather - A text description of the observed weather conditions, such as "Fog", "Rain", or "Snow".
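Given the columns above and the stated M/D/YYYY H:MM timestamp format, loading the file into a time-indexed frame is straightforward. A minimal sketch, with two illustrative inline rows standing in for the real CSV:

```python
import io
import pandas as pd

# Inline sample standing in for the real CSV; column names and the
# M/D/YYYY H:MM timestamp format follow the dataset description,
# the values themselves are illustrative.
csv = io.StringIO(
    "Date/Time,Temp_C,Dew Point Temp_C,Rel Hum_%,"
    "Wind Speed_km/h,Visibility_km,Press_kPa,Weather\n"
    "1/1/2012 0:00,-1.8,-3.9,86,4,8.0,101.24,Fog\n"
    "1/1/2012 1:00,-1.8,-3.7,87,4,8.0,101.24,Fog\n"
)
df = pd.read_csv(csv, parse_dates=["Date/Time"])
df = df.set_index("Date/Time")  # hourly time-series index
```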

  2. Data from: Best Management Practices Statistical Estimator (BMPSE) Version...

    • catalog.data.gov
    • data.usgs.gov
    Updated Nov 27, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Best Management Practices Statistical Estimator (BMPSE) Version 1.2.0 [Dataset]. https://catalog.data.gov/dataset/best-management-practices-statistical-estimator-bmpse-version-1-2-0
    Explore at:
    Dataset updated
    Nov 27, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The Best Management Practices Statistical Estimator (BMPSE) version 1.2.0 was developed by the U.S. Geological Survey (USGS), in cooperation with the Federal Highway Administration (FHWA) Office of Project Delivery and Environmental Review to provide planning-level information about the performance of structural best management practices for decision makers, planners, and highway engineers to assess and mitigate possible adverse effects of highway and urban runoff on the Nation's receiving waters (Granato 2013, 2014; Granato and others, 2021). The BMPSE was assembled by using a Microsoft Access® database application to facilitate calculation of BMP performance statistics. Granato (2014) developed quantitative methods to estimate values of the trapezoidal-distribution statistics, correlation coefficients, and the minimum irreducible concentration (MIC) from available data. Granato (2014) developed the BMPSE to hold and process data from the International Stormwater Best Management Practices Database (BMPDB, www.bmpdatabase.org). Version 1.0 of the BMPSE contained a subset of the data from the 2012 version of the BMPDB; the current version of the BMPSE (1.2.0) contains a subset of the data from the December 2019 version of the BMPDB. Selected data from the BMPDB were screened for import into the BMPSE in consultation with Jane Clary, the data manager for the BMPDB. Modifications included identifying water quality constituents, making measurement units consistent, identifying paired inflow and outflow values, and converting BMPDB water quality values set as half the detection limit back to the detection limit. Total polycyclic aromatic hydrocarbons (PAH) values were added to the BMPSE from BMPDB data; they were calculated from individual PAH measurements at sites with enough data to calculate totals. 
    The BMPSE tool can sort and rank the data, calculate plotting positions, calculate initial estimates, and calculate potential correlations to facilitate the distribution-fitting process (Granato, 2014). For water-quality ratio analysis, the BMPSE generates the input files and the list of filenames for each constituent within the Graphical User Interface (GUI). The BMPSE calculates the Spearman’s rho (ρ) and Kendall’s tau (τ) correlation coefficients with their respective 95-percent confidence limits and the probability that each correlation coefficient value is not significantly different from zero by using standard methods (Granato, 2014). If the 95-percent confidence limit values are of the same sign, then the correlation coefficient is statistically different from zero. For hydrograph extension, the BMPSE calculates ρ and τ between the inflow volume and the hydrograph-extension values (Granato, 2014). For volume reduction, the BMPSE calculates ρ and τ between the inflow volume and the ratio of outflow to inflow volumes (Granato, 2014). For water-quality treatment, the BMPSE calculates ρ and τ between the inflow concentrations and the ratio of outflow to inflow concentrations (Granato, 2014; 2020). The BMPSE also calculates ρ between the inflow and the outflow concentrations when a water-quality treatment analysis is done. The current version (1.2.0) of the BMPSE also has the option to calculate urban-runoff quality statistics from inflows to BMPs by using computer code developed for the Highway Runoff Database (Granato and Cazenas, 2009; Granato, 2019).

    References:

    Granato, G.E., 2013, Stochastic empirical loading and dilution model (SELDM) version 1.0.0: U.S. Geological Survey Techniques and Methods, book 4, chap. C3, 112 p., CD-ROM, https://pubs.usgs.gov/tm/04/c03

    Granato, G.E., 2014, Statistics for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater runoff best management practices (BMPs): U.S. Geological Survey Scientific Investigations Report 2014–5037, 37 p., http://dx.doi.org/10.3133/sir20145037

    Granato, G.E., 2019, Highway-Runoff Database (HRDB) Version 1.1.0: U.S. Geological Survey data release, https://doi.org/10.5066/P94VL32J

    Granato, G.E., and Cazenas, P.A., 2009, Highway-Runoff Database (HRDB Version 1.0)--A data warehouse and preprocessor for the stochastic empirical loading and dilution model: Washington, D.C., U.S. Department of Transportation, Federal Highway Administration, FHWA-HEP-09-004, 57 p., https://pubs.usgs.gov/sir/2009/5269/disc_content_100a_web/FHWA-HEP-09-004.pdf

    Granato, G.E., Spaetzel, A.B., and Medalie, L., 2021, Statistical methods for simulating structural stormwater runoff best management practices (BMPs) with the stochastic empirical loading and dilution model (SELDM): U.S. Geological Survey Scientific Investigations Report 2020–5136, 41 p., https://doi.org/10.3133/sir20205136
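The rank-correlation screening described in this entry can be approximated with scipy. A sketch on synthetic inflow data; the Fisher-z confidence limits are a standard textbook construction and an assumption about the "standard methods" cited, not the BMPSE's actual code:

```python
import numpy as np
from scipy import stats

# Illustrative inflow concentrations and outflow/inflow ratios (not BMPDB data).
rng = np.random.default_rng(42)
inflow = rng.lognormal(mean=2.0, sigma=0.5, size=40)
ratio = 0.6 + 0.3 * rng.random(40) - 0.002 * inflow  # weak negative trend

rho, rho_p = stats.spearmanr(inflow, ratio)
tau, tau_p = stats.kendalltau(inflow, ratio)

# Standard Fisher-z 95-percent confidence limits for rho (an assumption,
# for illustration of the sign test described in the text).
n = len(inflow)
z = np.arctanh(rho)
se = 1.0 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

# If both confidence limits share a sign, rho differs significantly from zero.
significant = (lo > 0) or (hi < 0)
```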

  3. Turkey Liner Shipping Connectivity Index (Maximum Value in 2004 = 100)

    • tradingeconomics.com
    csv, excel, json, xml
    Cite
    TRADING ECONOMICS, Turkey Liner Shipping Connectivity Index Maximum Value In 2004 100 [Dataset]. https://tradingeconomics.com/turkey/liner-shipping-connectivity-index-maximum-value-in-2004--100-wb-data.html
    Explore at:
    csv, xml, excel, json
    Available download formats
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1976 - Dec 31, 2025
    Area covered
    Türkiye
    Description

    Actual value and historical data chart for Turkey Liner Shipping Connectivity Index Maximum Value In 2004 100

  4. Monthly Global Max Temperature Projections 2040-2069

    • climatedataportal.metoffice.gov.uk
    Updated Aug 23, 2022
    Cite
    Met Office (2022). Monthly Global Max Temperature Projections 2040-2069 [Dataset]. https://climatedataportal.metoffice.gov.uk/datasets/28d0a852eecd4173b68abab7900923ca
    Explore at:
    Dataset updated
    Aug 23, 2022
    Dataset authored and provided by
    Met Office (http://www.metoffice.gov.uk/)
    Description

    What does the data show?

    This data shows the monthly averages of maximum surface temperature (°C) for 2040-2069 using a combination of the CRU TS (v. 4.06) and UKCP18 global RCP2.6 datasets. The RCP2.6 scenario is an aggressive mitigation scenario where greenhouse gas emissions are strongly reduced.

    The data combines a baseline (1981-2010) value from CRU TS (v. 4.06) with an anomaly from UKCP18 global, where the anomaly is the change in temperature at 2040-2069 relative to 1981-2010.

    The data is provided on the WGS84 grid which measures approximately 60km x 60km (latitude x longitude) at the equator.

    Limitations of the data

    We recommend the use of multiple grid cells or an average of grid cells around a point of interest to help users get a sense of the variability in the area. This will provide a more robust set of values for informing decisions based on the data.

    What are the naming conventions and how do I explore the data?

    This data contains a field for each month’s average over the period. They are named 'tmax' (temperature maximum), the month, and ‘upper’, ‘median’ or ‘lower’. E.g. ‘tmax Mar Lower’ is the average of the daily maximum temperatures in March throughout 2040-2069, in the second lowest ensemble member.

    To understand how to explore the data, see this page: https://storymaps.arcgis.com/stories/457e7a2bc73e40b089fac0e47c63a578

    Please note, if viewing in ArcGIS Map Viewer, the map will default to ‘tmax Jan Median’ values.

    What do the ‘median’, ‘upper’, and ‘lower’ values mean?

    Climate models are numerical representations of the climate system. To capture uncertainty in projections for the future, an ensemble, or group, of climate models are run. Each ensemble member has slightly different starting conditions or model set-ups. Considering all of the model outcomes gives users a range of plausible conditions which could occur in the future.

    To select which ensemble members to use, the monthly averages of maximum surface temperature for the period 2040-2069 were calculated for each ensemble member and they were then ranked in order from lowest to highest for each location.

    The ‘lower’ fields are the second lowest ranked ensemble member. The ‘upper’ fields are the second highest ranked ensemble member. The ‘median’ field is the central value of the ensemble.

    This gives a median value, and a spread of the ensemble members indicating the range of possible outcomes in the projections. This spread of outputs can be used to infer the uncertainty in the projections. The larger the difference between the lower and upper fields, the greater the uncertainty.
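The selection rule described above (rank the ensemble members per location, then take the second lowest, the central value, and the second highest) can be sketched with numpy; the ensemble below is synthetic, not UKCP18 output:

```python
import numpy as np

# Synthetic ensemble: 15 members x 4 grid cells of a monthly tmax average (°C).
rng = np.random.default_rng(0)
members = 20.0 + rng.normal(scale=1.5, size=(15, 4))

ranked = np.sort(members, axis=0)    # rank members independently per location
lower = ranked[1]                    # second lowest ranked member
upper = ranked[-2]                   # second highest ranked member
median = np.median(members, axis=0)  # central value of the ensemble
spread = upper - lower               # wider spread -> greater uncertainty
```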

    Data source

    CRU TS v. 4.06 - (downloaded 12/07/22)

    UKCP18 v.20200110 (downloaded 17/08/22)

    Useful links

    • Further information on CRU TS
    • Further information on the UK Climate Projections (UKCP)
    • Further information on understanding climate data within the Met Office Climate Data Portal

  5. Number of companies in the travel agency sector in Italy 2018, by business...

    • statista.com
    Updated Jul 15, 2020
    Cite
    Statista (2020). Number of companies in the travel agency sector in Italy 2018, by business size [Dataset]. https://www.statista.com/statistics/1010648/number-of-companies-in-travel-agencies-sector-sector-by-business-size-in-italy/
    Explore at:
    Dataset updated
    Jul 15, 2020
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2018
    Area covered
    Italy
    Description

    What is the business size of Italian companies operating in the travel agency and tour operating sector? In 2018, this industry mainly consisted of small enterprises. As reported by Prometeia, approximately ** thousand companies operating in this sector in Italy had an annual production value of at most *** million euros. In terms of market share, these small companies accounted for about ** percent of the market. By contrast, only ** companies generated an annual production value exceeding ** million euros.

    Small companies, many employees

    In 2018, Italian small travel agencies and tour operators recorded the highest number of employees. Data reveal that companies with a production value not exceeding *** million euros counted in total over ** thousand members of staff. Thus, roughly ** percent of all the sector's employees worked for small businesses over the period considered.

    The Italian leaders of the sector

    When looking at the average percentage change in the production value of selected large companies in this sector, data reveal that Alpitour S.p.A. experienced significant growth between 2016 and 2018. Indeed, the Italian company increased its production value by an average of *** percent during the period considered.

  6. Summer Maximum Temperature Change - Projections (12km)

    • climatedataportal.metoffice.gov.uk
    Updated Jun 1, 2023
    + more versions
    Cite
    Met Office (2023). Summer Maximum Temperature Change - Projections (12km) [Dataset]. https://climatedataportal.metoffice.gov.uk/datasets/summer-maximum-temperature-change-projections-12km
    Explore at:
    Dataset updated
    Jun 1, 2023
    Dataset authored and provided by
    Met Office (http://www.metoffice.gov.uk/)
    Description

    [Updated 28/01/25 to fix an issue in the ‘Lower’ values, which were not fully representing the range of uncertainty. ‘Median’ and ‘Higher’ values remain unchanged. The size of the change varies by grid cell and fixed period/global warming levels, but the average difference between the ‘lower’ values before and after this update is 0.26°C.]

    What does the data show?

    This dataset shows the change in summer maximum air temperature for a range of global warming levels, including the recent past (2001-2020), compared to the 1981-2000 baseline period. Here, summer is defined as June-July-August. The dataset uses projections of daily maximum air temperature from UKCP18. For each year, the highest daily maximum temperature from the summer period is found. These are then averaged to give values for the 1981-2000 baseline, recent past (2001-2020) and global warming levels. The warming levels available are 1.5°C, 2.0°C, 2.5°C, 3.0°C and 4.0°C above the pre-industrial (1850-1900) period. The recent past value and global warming level values are stated as a change (in °C) relative to the 1981-2000 value. This enables users to compare summer maximum temperature trends for the different periods. In addition to the change values, values for the 1981-2000 baseline (corresponding to 0.51°C warming) and recent past (2001-2020, corresponding to 0.87°C warming) are also provided. This is summarised below:

    • 1981-2000 baseline - Average temperature (°C) for the period
    • 2001-2020 (recent past) - Average temperature (°C) for the period
    • 2001-2020 (recent past) change - Temperature change (°C) relative to 1981-2000
    • 1.5°C global warming level change - Temperature change (°C) relative to 1981-2000
    • 2°C global warming level change - Temperature change (°C) relative to 1981-2000
    • 2.5°C global warming level change - Temperature change (°C) relative to 1981-2000
    • 3°C global warming level change - Temperature change (°C) relative to 1981-2000
    • 4°C global warming level change - Temperature change (°C) relative to 1981-2000

    What is a global warming level?

    The Summer Maximum Temperature Change is calculated from the UKCP18 regional climate projections using the high emissions scenario (RCP 8.5) where greenhouse gas emissions continue to grow. Instead of considering future climate change during specific time periods (e.g. decades) for this scenario, the dataset is calculated at various levels of global warming relative to the pre-industrial (1850-1900) period. The world has already warmed by around 1.1°C (between 1850–1900 and 2011–2020), whilst this dataset allows for the exploration of greater levels of warming. The global warming levels available in this dataset are 1.5°C, 2°C, 2.5°C, 3°C and 4°C. The data at each warming level was calculated using a 21 year period. These 21 year periods are calculated by taking 10 years either side of the first year at which the global warming level is reached. This time will be different for different model ensemble members. To calculate the value for the Summer Maximum Temperature Change, an average is taken across the 21 year period.

    We cannot provide a precise likelihood for particular emission scenarios being followed in the real world future. However, we do note that RCP8.5 corresponds to emissions considerably above those expected with current international policy agreements. The results are also expressed for several global warming levels because we do not yet know which level will be reached in the real climate, as it will depend on future greenhouse emission choices and the sensitivity of the climate system, which is uncertain. Estimates based on the assumption of current international agreements on greenhouse gas emissions suggest a median warming level in the region of 2.4-2.8°C, but it could either be higher or lower than this level.

    What are the naming conventions and how do I explore the data?

    These data contain a field for each warming level and the 1981-2000 baseline. They are named 'tasmax summer change' (change in air 'temperature at surface'), the warming level or baseline, and 'upper', 'median' or 'lower' as per the description below. E.g. 'tasmax summer change 2.0 median' is the median value for summer for the 2.0°C warming level. Decimal points are included in field aliases but not in field names, e.g. 'tasmax summer change 2.0 median' is named 'tasmax_summer_change_20_median'. To understand how to explore the data, refer to the New Users ESRI Storymap. Please note, if viewing in ArcGIS Map Viewer, the map will default to ‘tasmax summer change 2.0°C median’ values.

    What do the ‘median’, ‘upper’, and ‘lower’ values mean?

    Climate models are numerical representations of the climate system. To capture uncertainty in projections for the future, an ensemble, or group, of climate models are run. Each ensemble member has slightly different starting conditions or model set-ups. Considering all of the model outcomes gives users a range of plausible conditions which could occur in the future. For this dataset, the model projections consist of 12 separate ensemble members. To select which ensemble members to use, the Summer Maximum Temperature Change was calculated for each ensemble member and they were then ranked in order from lowest to highest for each location. The ‘lower’ fields are the second lowest ranked ensemble member. The ‘higher’ fields are the second highest ranked ensemble member. The ‘median’ field is the central value of the ensemble.

    This gives a median value, and a spread of the ensemble members indicating the range of possible outcomes in the projections. This spread of outputs can be used to infer the uncertainty in the projections. The larger the difference between the lower and higher fields, the greater the uncertainty. ‘Lower’, ‘median’ and ‘upper’ are also given for the baseline period as these values also come from the model that was used to produce the projections. This allows a fair comparison between the model projections and the recent past.

    Useful links

    • Further information on the UK Climate Projections (UKCP)
    • Further information on understanding climate data within the Met Office Climate Data Portal
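The calculation this entry describes (highest daily summer maximum per year, averaged over a 21-year warming-level window, expressed as a change from the 1981-2000 baseline) can be sketched with numpy. The arrays below are synthetic stand-ins for UKCP18 output, for one grid cell and one ensemble member:

```python
import numpy as np

rng = np.random.default_rng(1)

def summer_max_mean(daily_tmax_by_year):
    # daily_tmax_by_year: (years, 92) daily JJA maximum temperatures (°C).
    # Take the highest daily maximum per year, then average over the period.
    return daily_tmax_by_year.max(axis=1).mean()

# Synthetic data: 20-year baseline and a 21-year window around a warming level.
baseline = rng.normal(24.0, 3.0, size=(20, 92))    # 1981-2000
warming_2c = rng.normal(26.5, 3.0, size=(21, 92))  # 2.0°C global warming level

# Change (°C) relative to the 1981-2000 baseline, as in the dataset fields.
change = summer_max_mean(warming_2c) - summer_max_mean(baseline)
```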

  7. Gold - Price Data

    • tradingeconomics.com
    • it.tradingeconomics.com
    • +13more
    csv, excel, json, xml
    Updated Dec 2, 2025
    Cite
    TRADING ECONOMICS (2025). Gold - Price Data [Dataset]. https://tradingeconomics.com/commodity/gold
    Explore at:
    excel, csv, json, xml
    Available download formats
    Dataset updated
    Dec 2, 2025
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 3, 1968 - Dec 2, 2025
    Area covered
    World
    Description

    Gold fell to 4,199.97 USD/t.oz on December 2, 2025, down 0.75% from the previous day. Over the past month, Gold's price has risen 4.93%, and it is up 58.92% compared to the same time last year, according to trading on a contract for difference (CFD) that tracks the benchmark market for this commodity. Gold - values, historical data, forecasts and news - updated in December 2025.

  8. Hydrometric Historical Data (HYDAT) - Annual Maximum and Minimum Daily Water...

    • api.weather.gc.ca
    Updated Feb 17, 2021
    + more versions
    Cite
    (2021). Hydrometric Historical Data (HYDAT) - Annual Maximum and Minimum Daily Water Level or Flow [Dataset]. https://api.weather.gc.ca/collections/hydrometric-annual-statistics
    Explore at:
    html, application/geo+json, application/schema+json, jsonld, json, zip
    Available download formats
    Dataset updated
    Feb 17, 2021
    Description

    The annual maximum and minimum daily data are the maximum and minimum daily mean values for a given year.
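This statistic is straightforward to reproduce in pandas from a daily-mean series. A sketch with a synthetic record standing in for a HYDAT station:

```python
import numpy as np
import pandas as pd

# Synthetic daily mean discharge series standing in for a HYDAT station record.
days = pd.date_range("2019-01-01", "2020-12-31", freq="D")
rng = np.random.default_rng(7)
daily_mean = pd.Series(rng.gamma(2.0, 5.0, size=len(days)), index=days)

# Annual maximum and minimum of the daily mean values, per the description.
annual_max = daily_mean.groupby(daily_mean.index.year).max()
annual_min = daily_mean.groupby(daily_mean.index.year).min()
```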

  9. Historical Air Quality

    • kaggle.com
    zip
    Updated Feb 12, 2019
    Cite
    US Environmental Protection Agency (2019). Historical Air Quality [Dataset]. https://www.kaggle.com/datasets/epa/epa-historical-air-quality
    Explore at:
    zip (0 bytes)
    Available download formats
    Dataset updated
    Feb 12, 2019
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Authors
    US Environmental Protection Agency
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The AQS Data Mart is a database containing all of the information from AQS. It has every measured value the EPA has collected via the national ambient air monitoring program. It also includes the associated aggregate values calculated by EPA (8-hour, daily, annual, etc.). The AQS Data Mart is a copy of AQS made once per week and made accessible to the public through web-based applications. The intended users of the Data Mart are air quality data analysts in the regulatory, academic, and health research communities. It is intended for those who need to download large volumes of detailed technical data stored at EPA and does not provide any interactive analytical tools. It serves as the back-end database for several Agency interactive tools that could not fully function without it: AirData, AirCompare, The Remote Sensing Information Gateway, the Map Monitoring Sites KML page, etc.

    AQS must maintain constant readiness to accept data and meet high data integrity requirements, and is therefore limited in the number of users and queries to which it can respond. The Data Mart, as a read-only copy, allows wider access.

    The most commonly requested aggregation levels of data (and key metrics in each) are:

    • Sample Values (2.4 billion values back as far as 1957; national consistency begins in 1980; data for 500 substances routinely collected)
      - The sample value converted to standard units of measure (generally 1-hour averages as reported to EPA, sometimes 24-hour averages)
      - Local Standard Time (LST) and GMT timestamps
      - Measurement method
      - Measurement uncertainty, where known
      - Any exceptional events affecting the data

    • NAAQS Averages
      - NAAQS average values (8-hour averages for ozone and CO, 24-hour averages for PM2.5)

    • Daily Summary Values (each monitor has the following calculated each day)
      - Observation count
      - Observation per cent (of expected observations)
      - Arithmetic mean of observations
      - Max observation and time of max
      - AQI (air quality index) where applicable
      - Number of observations > Standard where applicable

    • Annual Summary Values (each monitor has the following calculated each year)
      - Observation count and per cent
      - Valid days
      - Required observation count
      - Null observation count
      - Exceptional values count
      - Arithmetic Mean and Standard Deviation
      - 1st - 4th maximum (highest) observations
      - Percentiles (99, 98, 95, 90, 75, 50)
      - Number of observations > Standard

    • Site and Monitor Information
      - FIPS State Code (the first 5 items on this list make up the AQS Monitor Identifier)
      - FIPS County Code
      - Site Number (unique within the county)
      - Parameter Code (what is measured)
      - POC (Parameter Occurrence Code) to distinguish different samplers at the same site
      - Latitude
      - Longitude
      - Measurement method information
      - Owner / operator / data-submitter information
      - Monitoring Network to which the monitor belongs
      - Exemptions from regulatory requirements
      - Operational dates
      - City and CBSA where the monitor is located

    • Quality Assurance Information
      - Various data fields related to the 19 different QA assessments possible

    Querying BigQuery tables

    You can use the BigQuery Python client library to query tables in this dataset in Kernels. Note that methods available in Kernels are limited to querying data. Tables are at bigquery-public-data.epa_historical_air_quality.[TABLENAME]. Fork this kernel to get started.
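A sketch of that query pattern follows. The table name air_quality_annual_summary is an assumption for illustration (substitute a real [TABLENAME] from the dataset); actually running the query requires the google-cloud-bigquery package and authenticated credentials, so the client call is kept in a separate, uncalled function here:

```python
def build_query(table_name: str, limit: int = 10) -> str:
    # Tables live under the public project and dataset named in the description:
    # bigquery-public-data.epa_historical_air_quality.[TABLENAME]
    return (
        f"SELECT * FROM `bigquery-public-data.epa_historical_air_quality.{table_name}` "
        f"LIMIT {limit}"
    )

def run_query(sql: str):
    # Requires google-cloud-bigquery and credentials; not executed here.
    from google.cloud import bigquery
    client = bigquery.Client()
    return client.query(sql).to_dataframe()

# "air_quality_annual_summary" is a hypothetical table name for illustration.
sql = build_query("air_quality_annual_summary")
```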

    Acknowledgements

    Data provided by the US Environmental Protection Agency Air Quality System Data Mart.

  10. Crude Oil - Price Data

    • tradingeconomics.com
    • ar.tradingeconomics.com
    • +14more
    csv, excel, json, xml
    Updated Dec 2, 2025
    Cite
    TRADING ECONOMICS (2025). Crude Oil - Price Data [Dataset]. https://tradingeconomics.com/commodity/crude-oil
    Explore at:
    csv, json, xml, excel
    Available download formats
    Dataset updated
    Dec 2, 2025
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 30, 1983 - Dec 2, 2025
    Area covered
    World
    Description

    Crude Oil fell to 59.17 USD/Bbl on December 2, 2025, down 0.25% from the previous day. Over the past month, Crude Oil's price has fallen 3.08%, and it is down 15.40% compared to the same time last year, according to trading on a contract for difference (CFD) that tracks the benchmark market for this commodity. Crude Oil - values, historical data, forecasts and news - updated in December 2025.

  11. Annual Count of Hot Summer Days - Projections (12km)

    • hub.arcgis.com
    • climatedataportal.metoffice.gov.uk
    Updated Feb 7, 2023
    + more versions
    Cite
    Met Office (2023). Annual Count of Hot Summer Days - Projections (12km) [Dataset]. https://hub.arcgis.com/datasets/1a89ff97e169482291ed49ff29ce1120
    Explore at:
    Dataset updated
    Feb 7, 2023
    Dataset authored and provided by
    Met Office
    Description

    [Updated 28/01/25 to fix an issue in the ‘Lower’ values, which were not fully representing the range of uncertainty. ‘Median’ and ‘Higher’ values remain unchanged. The size of the change varies by grid cell and fixed period/global warming levels, but the average difference between the 'lower' values before and after this update is 0.2.]

    What does the data show?
    The Annual Count of Hot Summer Days is the number of days per year where the maximum daily temperature is above 30°C. It measures how many times the threshold is exceeded (not by how much) in a year. Note that the term ‘hot summer days’ refers to the threshold: temperatures above 30°C outside the summer months also contribute to the annual count. The results should be interpreted as an approximation of the projected number of days when the threshold is exceeded, as there will be many factors, such as natural variability and local-scale processes, that the climate model is unable to represent.

    The Annual Count of Hot Summer Days is calculated for two baseline (historical) periods, 1981-2000 (corresponding to 0.51°C warming) and 2001-2020 (corresponding to 0.87°C warming), and for global warming levels of 1.5°C, 2.0°C, 2.5°C, 3.0°C and 4.0°C above the pre-industrial (1850-1900) period. This enables users to compare the future number of hot summer days to previous values.

    What are the possible societal impacts?
    The Annual Count of Hot Summer Days indicates increased health risks, transport disruption and damage to infrastructure from high temperatures. It is based on exceeding a maximum daily temperature of 30°C. Impacts include:
    - Increased heat-related illnesses, hospital admissions or death.
    - Transport disruption due to overheating of railway infrastructure. Overhead power lines also become less efficient.

    Other metrics such as the Annual Count of Summer Days (days above 25°C), the Annual Count of Extreme Summer Days (days above 35°C) and the Annual Count of Tropical Nights (where the minimum temperature does not fall below 20°C) also indicate impacts from high temperatures; however, they use different temperature thresholds.

    What is a global warming level?
    The Annual Count of Hot Summer Days is calculated from the UKCP18 regional climate projections using the high emissions scenario (RCP 8.5), where greenhouse gas emissions continue to grow. Instead of considering future climate change during specific time periods (e.g. decades) for this scenario, the dataset is calculated at various levels of global warming relative to the pre-industrial (1850-1900) period. The world has already warmed by around 1.1°C (between 1850–1900 and 2011–2020), whilst this dataset allows for the exploration of greater levels of warming.

    The global warming levels available in this dataset are 1.5°C, 2°C, 2.5°C, 3°C and 4°C. The data at each warming level was calculated using a 21-year period, obtained by taking 10 years either side of the first year at which the global warming level is reached; this year differs between model ensemble members. To calculate the value for the Annual Count of Hot Summer Days, an average is taken across the 21-year period. Therefore, the Annual Count of Hot Summer Days shows the number of hot summer days that could occur each year at each given level of warming.

    We cannot provide a precise likelihood for particular emission scenarios being followed in the real-world future. However, we do note that RCP8.5 corresponds to emissions considerably above those expected with current international policy agreements. The results are also expressed for several global warming levels because we do not yet know which level will be reached in the real climate: it will depend on future greenhouse gas emission choices and on the sensitivity of the climate system, which is uncertain. Estimates based on the assumption of current international agreements on greenhouse gas emissions suggest a median warming level in the region of 2.4-2.8°C, but it could be either higher or lower than this level.

    What are the naming conventions and how do I explore the data?
    This data contains a field for each global warming level and the two baselines. The fields are named with ‘HSD’ (where HSD means Hot Summer Days), the warming level or baseline, and ‘upper’, ‘median’ or ‘lower’ as described below. E.g. ‘Hot Summer Days 2.5 median’ is the median value for the 2.5°C warming level. Decimal points are included in field aliases but not field names, e.g. ‘Hot Summer Days 2.5 median’ is ‘HotSummerDays_25_median’. To understand how to explore the data, see this page: https://storymaps.arcgis.com/stories/457e7a2bc73e40b089fac0e47c63a578
    Please note, if viewing in ArcGIS Map Viewer, the map will default to ‘HSD 2.0°C median’ values.

    What do the ‘median’, ‘upper’, and ‘lower’ values mean?
    Climate models are numerical representations of the climate system. To capture uncertainty in projections for the future, an ensemble, or group, of climate models is run, each ensemble member having slightly different starting conditions or model set-ups. Considering all of the model outcomes gives users a range of plausible conditions which could occur in the future. For this dataset, the model projections consist of 12 separate ensemble members. To select which ensemble members to use, the Annual Count of Hot Summer Days was calculated for each ensemble member, and the members were then ranked from lowest to highest for each location. The ‘lower’ fields are the second lowest ranked ensemble member, the ‘upper’ fields are the second highest ranked ensemble member, and the ‘median’ field is the central value of the ensemble.

    This gives a median value and a spread of the ensemble members indicating the range of possible outcomes in the projections. This spread of outputs can be used to infer the uncertainty in the projections: the larger the difference between the lower and upper fields, the greater the uncertainty. ‘Lower’, ‘median’ and ‘upper’ are also given for the baseline periods, as these values also come from the model that was used to produce the projections. This allows a fair comparison between the model projections and the recent past.

    Useful links
    This dataset was calculated following the methodology in the ‘Future Changes to high impact weather in the UK’ report and uses the same temperature thresholds as the 'State of the UK Climate' report.
    Further information on the UK Climate Projections (UKCP).
    Further information on understanding climate data within the Met Office Climate Data Portal.
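As an illustrative sketch only (this is not Met Office code, and the synthetic temperatures below are hypothetical), the threshold count and the stated field-naming convention could be reproduced in Python roughly as follows:

```python
import numpy as np
import pandas as pd

def annual_hot_summer_days(daily_tmax: pd.Series) -> pd.Series:
    """Count the days per year whose daily maximum temperature exceeds 30 degC.

    daily_tmax: daily maximum temperature in degC, indexed by date.
    """
    return (daily_tmax > 30.0).groupby(daily_tmax.index.year).sum()

def field_name(metric: str, level: str, stat: str) -> str:
    """Build a field name per the stated convention: spaces removed from the
    metric name and the decimal point dropped from the warming level,
    e.g. ('Hot Summer Days', '2.5', 'median') -> 'HotSummerDays_25_median'."""
    return f"{metric.replace(' ', '')}_{level.replace('.', '')}_{stat}"

# Synthetic daily maxima for two years (illustrative values only)
dates = pd.date_range("2020-01-01", "2021-12-31", freq="D")
tmax = pd.Series(np.random.default_rng(0).normal(18, 8, len(dates)), index=dates)
counts = annual_hot_summer_days(tmax)  # one count per calendar year

print(field_name("Hot Summer Days", "2.5", "median"))  # HotSummerDays_25_median
```

In the real dataset each stored value is additionally an average of such annual counts over a 21-year warming-level window, per ensemble member.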

  12. Global Surface Summary of the Day - GSOD

    • ncei.noaa.gov
    • datasets.ai
    • +4more
    csv
    Updated Apr 22, 2015
    + more versions
    Cite
    DOC/NOAA/NESDIS/NCDC > National Climatic Data Center, NESDIS, NOAA, U.S. Department of Commerce (2015). Global Surface Summary of the Day - GSOD [Dataset]. https://www.ncei.noaa.gov/access/metadata/landing-page/bin/iso?id=gov.noaa.ncdc:C00516
    Explore at:
    csvAvailable download formats
    Dataset updated
    Apr 22, 2015
    Dataset provided by
    National Oceanic and Atmospheric Administrationhttp://www.noaa.gov/
    National Centers for Environmental Informationhttps://www.ncei.noaa.gov/
    Authors
    DOC/NOAA/NESDIS/NCDC > National Climatic Data Center, NESDIS, NOAA, U.S. Department of Commerce
    Time period covered
    Jan 1, 1929 - Present
    Area covered
    Description

    Global Surface Summary of the Day is derived from the Integrated Surface Hourly (ISH) dataset. The ISH dataset includes global data obtained from the USAF Climatology Center, located in the Federal Climate Complex with NCDC. The latest daily summary data are normally available 1-2 days after the date-time of the observations used in the daily summaries. The online data files begin with 1929 and are, at the time of this writing, at the Version 8 software level. Over 9000 stations' data are typically available.

    The daily elements included in the dataset (as available from each station) are:
    - Mean temperature (0.1 Fahrenheit)
    - Mean dew point (0.1 Fahrenheit)
    - Mean sea level pressure (0.1 mb)
    - Mean station pressure (0.1 mb)
    - Mean visibility (0.1 miles)
    - Mean wind speed (0.1 knots)
    - Maximum sustained wind speed (0.1 knots)
    - Maximum wind gust (0.1 knots)
    - Maximum temperature (0.1 Fahrenheit)
    - Minimum temperature (0.1 Fahrenheit)
    - Precipitation amount (0.01 inches)
    - Snow depth (0.1 inches)
    - Indicators for the occurrence of: fog, rain or drizzle, snow or ice pellets, hail, thunder, tornado/funnel cloud

    Global summary of day data for 18 surface meteorological elements are derived from the synoptic/hourly observations contained in USAF DATSAV3 Surface data and Federal Climate Complex Integrated Surface Hourly (ISH). Historical data are generally available for 1929 to the present, with data from 1973 to the present being the most complete. For some periods, one or more countries' data may not be available due to data restrictions or communications problems. In deriving the summary of day data, a minimum of 4 observations for the day must be present (this allows for stations which report 4 synoptic observations/day). Since the data are converted to constant units (e.g., knots), slight rounding error from the originally reported values may occur (e.g., 9.9 instead of 10.0). The mean daily values described below are based on the hours of operation for the station.

    For some stations/countries, the visibility will sometimes 'cluster' around a value (such as 10 miles) due to the practice of not reporting visibilities greater than certain distances. The daily extremes and totals (maximum wind gust, precipitation amount, and snow depth) will only appear if the station reports the data sufficiently to provide a valid value. Therefore, these three elements will appear less frequently than other values. Also, these elements are derived from the stations' reports during the day, and may comprise a 24-hour period which includes a portion of the previous day. The data are reported and summarized based on Greenwich Mean Time (GMT, 0000Z - 2359Z) since the original synoptic/hourly data are reported and based on GMT.
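A minimal sketch of the two rules described above, the 4-observation minimum and the unit-conversion rounding, assuming a pandas hourly series (this is not NCEI's actual processing code):

```python
import math
import pandas as pd

MS_TO_KNOTS = 1.943844  # one metre per second expressed in knots

def daily_mean(hourly: pd.Series, min_obs: int = 4) -> pd.Series:
    """Daily mean of an hourly series; days with fewer than min_obs
    observations become NaN, mirroring the stated 4-report minimum."""
    grouped = hourly.groupby(hourly.index.date)
    means = grouped.mean()
    means[grouped.count() < min_obs] = float("nan")
    return means

def to_knots(speed_ms: float) -> float:
    """Convert m/s to knots with one-decimal rounding - the conversion step
    that can yield small discrepancies such as 9.9 instead of 10.0."""
    return round(speed_ms * MS_TO_KNOTS, 1)
```

A day with only three reports comes out as NaN rather than as a possibly unrepresentative mean.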

  13. Annual Count of Icing Days - Projections (12km)

    • climate-themetoffice.hub.arcgis.com
    • climatedataportal.metoffice.gov.uk
    Updated Feb 7, 2023
    Cite
    Met Office (2023). Annual Count of Icing Days - Projections (12km) [Dataset]. https://climate-themetoffice.hub.arcgis.com/datasets/TheMetOffice::annual-count-of-icing-days-projections-12km/explore
    Explore at:
    Dataset updated
    Feb 7, 2023
    Dataset authored and provided by
    Met Office
    Area covered
    Description

    [Updated 28/01/25 to fix an issue in the ‘Lower’ values, which were not fully representing the range of uncertainty. ‘Median’ and ‘Higher’ values remain unchanged. The size of the change varies by grid cell and fixed period/global warming levels, but the average difference between the 'lower' values before and after this update is 0.1.]

    What does the data show?
    The Annual Count of Icing Days is the number of days per year where the maximum daily temperature is below 0°C. Note that the Annual Count of Icing Days is more severe than the Annual Count of Frost Days, as icing days refer to the daily maximum temperature whereas frost days refer to the daily minimum temperature. The Annual Count of Icing Days measures how many times the threshold is met (not by how much) in a year. The results should be interpreted as an approximation of the projected number of days when the threshold is met, as there will be many factors, such as natural variability and local-scale processes, that the climate model is unable to represent.

    The Annual Count of Icing Days is calculated for two baseline (historical) periods, 1981-2000 (corresponding to 0.51°C warming) and 2001-2020 (corresponding to 0.87°C warming), and for global warming levels of 1.5°C, 2.0°C, 2.5°C, 3.0°C and 4.0°C above the pre-industrial (1850-1900) period. This enables users to compare the future number of icing days to previous values.

    What are the possible societal impacts?
    The Annual Count of Icing Days indicates increased cold weather disruption due to a higher than normal chance of ice and snow. It is based on the maximum daily temperature being below 0°C, i.e. the temperature does not rise above 0°C for the entire day. Impacts include:
    - Damage to crops.
    - Transport disruption.
    - Increased energy demand.

    The Annual Count of Frost Days is a similar metric measuring impacts from cold temperatures; it indicates less severe cold weather impacts.

    What is a global warming level?
    The Annual Count of Icing Days is calculated from the UKCP18 regional climate projections using the high emissions scenario (RCP 8.5), where greenhouse gas emissions continue to grow. Instead of considering future climate change during specific time periods (e.g. decades) for this scenario, the dataset is calculated at various levels of global warming relative to the pre-industrial (1850-1900) period. The world has already warmed by around 1.1°C (between 1850–1900 and 2011–2020), whilst this dataset allows for the exploration of greater levels of warming.

    The global warming levels available in this dataset are 1.5°C, 2°C, 2.5°C, 3°C and 4°C. The data at each warming level was calculated using a 21-year period, obtained by taking 10 years either side of the first year at which the global warming level is reached; this year differs between model ensemble members. To calculate the value for the Annual Count of Icing Days, an average is taken across the 21-year period. Therefore, the Annual Count of Icing Days shows the number of icing days that could occur each year at each given level of warming.

    We cannot provide a precise likelihood for particular emission scenarios being followed in the real-world future. However, we do note that RCP8.5 corresponds to emissions considerably above those expected with current international policy agreements. The results are also expressed for several global warming levels because we do not yet know which level will be reached in the real climate: it will depend on future greenhouse gas emission choices and on the sensitivity of the climate system, which is uncertain. Estimates based on the assumption of current international agreements on greenhouse gas emissions suggest a median warming level in the region of 2.4-2.8°C, but it could be either higher or lower than this level.

    What are the naming conventions and how do I explore the data?
    This data contains a field for each global warming level and the two baselines. The fields are named with ‘Icing Days’, the warming level or baseline, and ‘upper’, ‘median’ or ‘lower’ as described below. E.g. ‘Icing Days 2.5 median’ is the median value for the 2.5°C warming level. Decimal points are included in field aliases but not field names, e.g. ‘Icing Days 2.5 median’ is ‘IcingDays_25_median’. To understand how to explore the data, see this page: https://storymaps.arcgis.com/stories/457e7a2bc73e40b089fac0e47c63a578
    Please note, if viewing in ArcGIS Map Viewer, the map will default to ‘Icing Days 2.0°C median’ values.

    What do the ‘median’, ‘upper’, and ‘lower’ values mean?
    Climate models are numerical representations of the climate system. To capture uncertainty in projections for the future, an ensemble, or group, of climate models is run, each ensemble member having slightly different starting conditions or model set-ups. Considering all of the model outcomes gives users a range of plausible conditions which could occur in the future. For this dataset, the model projections consist of 12 separate ensemble members. To select which ensemble members to use, the Annual Count of Icing Days was calculated for each ensemble member, and the members were then ranked from lowest to highest for each location. The ‘lower’ fields are the second lowest ranked ensemble member, the ‘upper’ fields are the second highest ranked ensemble member, and the ‘median’ field is the central value of the ensemble.

    This gives a median value and a spread of the ensemble members indicating the range of possible outcomes in the projections. This spread of outputs can be used to infer the uncertainty in the projections: the larger the difference between the lower and upper fields, the greater the uncertainty. ‘Lower’, ‘median’ and ‘upper’ are also given for the baseline periods, as these values also come from the model that was used to produce the projections. This allows a fair comparison between the model projections and the recent past.

    Useful links
    This dataset was calculated following the methodology in the ‘Future Changes to high impact weather in the UK’ report and uses the same temperature thresholds as the 'State of the UK Climate' report.
    Further information on the UK Climate Projections (UKCP).
    Further information on understanding climate data within the Met Office Climate Data Portal.
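The relationship between icing days and frost days described above can be sketched in Python (illustrative only, not Met Office code; the toy temperature values are hypothetical):

```python
import pandas as pd

def annual_icing_days(daily_tmax: pd.Series) -> pd.Series:
    """Days per year whose daily *maximum* temperature stays below 0 degC."""
    return (daily_tmax < 0.0).groupby(daily_tmax.index.year).sum()

def annual_frost_days(daily_tmin: pd.Series) -> pd.Series:
    """Days per year whose daily *minimum* temperature falls below 0 degC."""
    return (daily_tmin < 0.0).groupby(daily_tmin.index.year).sum()

# Toy data: four days. Since the minimum never exceeds the maximum, every
# icing day is necessarily also a frost day - the icing count is the more
# severe of the two metrics.
dates = pd.date_range("2021-01-01", periods=4, freq="D")
tmax = pd.Series([-2.0, 1.0, 3.0, -0.5], index=dates)
tmin = tmax - 5.0

icing = annual_icing_days(tmax)   # 2 days in 2021
frost = annual_frost_days(tmin)   # 4 days in 2021
```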

  14. Dataset: A Systematic Literature Review on the topic of High-value datasets

    • zenodo.org
    • data.niaid.nih.gov
    bin, png, txt
    Updated Jul 11, 2024
    Cite
    Anastasija Nikiforova; Anastasija Nikiforova; Nina Rizun; Nina Rizun; Magdalena Ciesielska; Magdalena Ciesielska; Charalampos Alexopoulos; Charalampos Alexopoulos; Andrea Miletič; Andrea Miletič (2024). Dataset: A Systematic Literature Review on the topic of High-value datasets [Dataset]. http://doi.org/10.5281/zenodo.8075918
    Explore at:
    png, bin, txtAvailable download formats
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Anastasija Nikiforova; Anastasija Nikiforova; Nina Rizun; Nina Rizun; Magdalena Ciesielska; Magdalena Ciesielska; Charalampos Alexopoulos; Charalampos Alexopoulos; Andrea Miletič; Andrea Miletič
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study ("Towards High-Value Datasets determination for data-driven development: a systematic literature review") conducted by Anastasija Nikiforova (University of Tartu), Nina Rizun, Magdalena Ciesielska (Gdańsk University of Technology), Charalampos Alexopoulos (University of the Aegean) and Andrea Miletič (University of Zagreb).
    It is being made public both to act as supplementary data for the "Towards High-Value Datasets determination for data-driven development: a systematic literature review" paper (the pre-print is available in Open Access here -> https://arxiv.org/abs/2305.10234) and to allow other researchers to use these data in their own work.


    The protocol is intended for the Systematic Literature review on the topic of High-value Datasets with the aim to gather information on how the topic of High-value datasets (HVD) and their determination has been reflected in the literature over the years and what has been found by these studies to date, incl. the indicators used in them, involved stakeholders, data-related aspects, and frameworks. The data in this dataset were collected in the result of the SLR over Scopus, Web of Science, and Digital Government Research library (DGRL) in 2023.

    ***Methodology***

    To understand how HVD determination has been reflected in the literature over the years and what has been found by these studies to date, all relevant literature covering this topic has been studied. To this end, the SLR was carried out by searching the digital libraries covered by Scopus, Web of Science (WoS) and the Digital Government Research library (DGRL).

    These databases were queried for the keywords ("open data" OR "open government data") AND ("high-value data*" OR "high value data*"), which were applied to the article title, keywords, and abstract to limit the results to papers where these objects were primary research objects rather than merely mentioned in the body, e.g., as future work. After deduplication, 11 articles were found to be unique and were further checked for relevance. As a result, a total of 9 articles were further examined. Each study was independently examined by at least two authors.
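A toy illustration (with assumed record fields, not the authors' tooling) of applying the stated boolean query to a bibliographic record's title, keywords and abstract:

```python
def matches_query(record: dict) -> bool:
    """True when the record satisfies ("open data" OR "open government data")
    AND ("high-value data*" OR "high value data*"). The trailing wildcard is
    covered by plain substring matching, since 'high-value data' also matches
    'high-value datasets'."""
    fields = ("title", "keywords", "abstract")
    text = " ".join(record.get(f, "") for f in fields).lower()
    open_data = "open data" in text or "open government data" in text
    hvd = "high-value data" in text or "high value data" in text
    return open_data and hvd

print(matches_query({"title": "Determining high-value datasets in open government data"}))  # True
print(matches_query({"title": "Open data portals in Europe"}))  # False
```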

    To attain the objective of our study, we developed the protocol, where the information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design- related information, (3) quality-related information, (4) HVD determination-related information.

    ***Test procedure***
    Each study was independently examined by at least two authors, where after the in-depth examination of the full-text of the article, the structured protocol has been filled for each study.
    The structure of the protocol is available in the supplementary files (see Protocol_HVD_SLR.odt, Protocol_HVD_SLR.docx).
    The data collected for each study by two researchers were then synthesized in one final version by the third researcher.

    ***Description of the data in this data set***

    Protocol_HVD_SLR provides the structure of the protocol
    Spreadsheet #1 provides the filled protocol for the relevant studies.
    Spreadsheet #2 provides the list of results of the search over the three indexing databases, i.e. before filtering out irrelevant studies.

    The information on each selected study was collected in four categories:
    (1) descriptive information,
    (2) approach- and research design- related information,
    (3) quality-related information,
    (4) HVD determination-related information

    Descriptive information
    1) Article number - a study number, corresponding to the study number assigned in an Excel worksheet
    2) Complete reference - the complete source information to refer to the study
    3) Year of publication - the year in which the study was published
    4) Journal article / conference paper / book chapter - the type of the paper -{journal article, conference paper, book chapter}
    5) DOI / Website- a link to the website where the study can be found
    6) Number of citations - the number of citations of the article in Google Scholar, Scopus, Web of Science
    7) Availability in OA - availability of an article in the Open Access
    8) Keywords - keywords of the paper as indicated by the authors
    9) Relevance for this study - what is the relevance level of the article for this study? {high / medium / low}

    Approach- and research design-related information
    10) Objective / RQ - the research objective / aim, established research questions
    11) Research method (including unit of analysis) - the methods used to collect data, including the unit of analysis (country, organisation, specific unit that has been analysed, e.g., the number of use-cases, scope of the SLR etc.)
    12) Contributions - the contributions of the study
    13) Method - whether the study uses a qualitative, quantitative, or mixed methods approach?
    14) Availability of the underlying research data - whether there is a reference to publicly available underlying research data, e.g., transcriptions of interviews, collected data, or an explanation of why these data are not shared
    15) Period under investigation - period (or moment) in which the study was conducted
    16) Use of theory / theoretical concepts / approaches - does the study mention any theory / theoretical concepts / approaches? If any theory is mentioned, how is theory used in the study?

    Quality- and relevance- related information
    17) Quality concerns - whether there are any quality concerns (e.g., limited information about the research methods used)?
    18) Primary research object - is the HVD a primary research object in the study? (primary - the paper is focused on HVD determination; secondary - mentioned but not studied (e.g., as part of discussion, future work etc.))

    HVD determination-related information
    19) HVD definition and type of value - how is the HVD defined in the article and / or any other equivalent term?
    20) HVD indicators - what are the indicators to identify HVD? How were they identified? (components & relationships, “input -> output")
    21) A framework for HVD determination - is there a framework presented for HVD identification? What components does it consist of and what are the relationships between these components? (detailed description)
    22) Stakeholders and their roles - what stakeholders or actors does HVD determination involve? What are their roles?
    23) Data - what data do HVD cover?
    24) Level (if relevant) - what is the level of the HVD determination covered in the article? (e.g., city, regional, national, international)


    ***Format of the file***
    .xls, .csv (for the first spreadsheet only), .odt, .docx

    ***Licenses or restrictions***
    CC-BY

    For more info, see README.txt

  15. United States Federal Corporate Tax Rate

    • tradingeconomics.com
    • fr.tradingeconomics.com
    • +13more
    csv, excel, json, xml
    Updated May 26, 2017
    Cite
    TRADING ECONOMICS (2017). United States Federal Corporate Tax Rate [Dataset]. https://tradingeconomics.com/united-states/corporate-tax-rate
    Explore at:
    xml, csv, json, excelAvailable download formats
    Dataset updated
    May 26, 2017
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 31, 1909 - Dec 31, 2025
    Area covered
    United States
    Description

    The Corporate Tax Rate in the United States stands at 21 percent. This dataset provides - United States Corporate Tax Rate - actual values, historical data, forecast, chart, statistics, economic calendar and news.

  16. Platinum - Price Data

    • tradingeconomics.com
    • ar.tradingeconomics.com
    • +13more
    csv, excel, json, xml
    Updated Dec 2, 2025
    Cite
    TRADING ECONOMICS (2025). Platinum - Price Data [Dataset]. https://tradingeconomics.com/commodity/platinum
    Explore at:
    xml, json, csv, excelAvailable download formats
    Dataset updated
    Dec 2, 2025
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 1968 - Dec 2, 2025
    Area covered
    World
    Description

    Platinum fell to 1,646.20 USD/t.oz on December 2, 2025, down 0.99% from the previous day. Over the past month, Platinum's price has risen 5.18%, and is up 73.12% compared to the same time last year, according to trading on a contract for difference (CFD) that tracks the benchmark market for this commodity. Platinum - values, historical data, forecasts and news - updated on December of 2025.

  17. Daily and monthly minimum, maximum and range of eReefs hydrodynamic model...

    • researchdata.edu.au
    Updated Oct 27, 2020
    Cite
    Lafond,Gael; Hammerton,Marc; Smith, Aaron; Lawrey, Eric (2020). Daily and monthly minimum, maximum and range of eReefs hydrodynamic model outputs - temperature, water elevation (AIMS, Source: CSIRO) [Dataset]. https://researchdata.edu.au/ereefs-aims-csiro-model-outputs/3766488
    Explore at:
    Dataset updated
    Oct 27, 2020
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Lafond,Gael; Hammerton,Marc; Smith, Aaron; Lawrey, Eric
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 1, 2010 - Nov 30, 2022
    Area covered
    Description

    This derived dataset contains basic statistical products derived from the eReefs CSIRO hydrodynamic model v2.0 outputs at both 1 km and 4 km resolution, and v4.0 outputs at 4 km, for both daily and monthly aggregation periods. The statistics generated are the daily minimum, maximum, mean and range. For monthly aggregations there are the monthly means of the daily minimum, maximum and range, and the monthly minimum, maximum and range. The dataset only calculates statistics for temperature and water elevation (eta).

    These are generated by the AIMS eReefs Platform (https://ereefs.aims.gov.au/). These statistical products are derived from the original hourly model outputs available via the National Computing Infrastructure (NCI) (https://thredds.nci.org.au/thredds/catalogs/fx3/catalog.html).

    The data is re-gridded from the original curvilinear grid used by the eReefs model into a regular grid so that the data files can be easily loaded into standard GIS software. These products are made available via a THREDDS server (https://thredds.ereefs.aims.gov.au/thredds/) in NetCDF format.
    This dataset contains two (2) products, based on the periods over which the statistics are determined: daily and monthly.

    Method:
    Data files are processed in two stages. The daily files are calculated from the original hourly files, then the monthly files are calculated from the daily files. See Technical Guide to Derived Products from CSIRO eReefs Models for details on the regridding process.
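The two-stage aggregation described above can be sketched with pandas (an assumed, simplified equivalent for a single time series; the real pipeline operates on NetCDF model output per grid cell and depth):

```python
import numpy as np
import pandas as pd

def daily_stats(hourly: pd.Series) -> pd.DataFrame:
    """Stage 1: hourly values -> daily mean, min, max and range."""
    d = hourly.resample("D").agg(["mean", "min", "max"])
    d["range"] = d["max"] - d["min"]
    return d

def monthly_stats(daily: pd.DataFrame) -> pd.DataFrame:
    """Stage 2: daily statistics -> monthly statistics, following the
    data-dictionary naming (e.g. min_min is the monthly minimum of the
    daily minima, range_mean the monthly mean of the daily ranges)."""
    g = daily.resample("MS")  # one group per calendar month
    return pd.DataFrame({
        "min_min": g["min"].min(),
        "min_mean": g["min"].mean(),
        "max_max": g["max"].max(),
        "max_mean": g["max"].mean(),
        "mean": g["mean"].mean(),
        "range_mean": g["range"].mean(),
    })

# Two days of synthetic hourly values (e.g. temperature at one grid cell)
idx = pd.date_range("2022-01-01", periods=48, freq="h")
hourly = pd.Series(np.arange(48.0), index=idx)

daily = daily_stats(hourly)     # two rows, one per day
monthly = monthly_stats(daily)  # one row, for January 2022
```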

    Data Dictionary:

    Daily statistics:
    The following variables can be found in the Daily statistics product:

    - temp_mean: mean temperature for each grid cell for the day.
    - temp_min: minimum temperature for each grid cell for the day.
    - temp_max: maximum temperature for each grid cell for the day.
    - temp_range: difference between maximum and minimum temperatures for each grid cell for the day.

    - eta_mean: mean surface elevation for each grid cell for the day.
    - eta_min: minimum surface elevation for each grid cell for the day.
    - eta_max: maximum surface elevation for each grid cell for the day.
    - eta_range: difference between maximum and minimum surface elevation for each grid cell for the day.

    Depths:

    Depths at 1km resolution: -2.35m, -5.35m, -18.0m, -49.0m
    Depths at 4km resolution: -1.5m, -5.55m, -17.75m, -49.0m

    * Monthly statistics:

    The following variables can be found in the Monthly statistics product:

    - temp_min_min: the minimum value of the "temp_min" variable from the Daily statistics product. This equates to the minimum temperature for each grid cell for the corresponding month.
    - temp_min_mean: the mean value of the "temp_min" variable from the Daily statistics product. This equates to the mean minimum temperature for each grid cell for the corresponding month.
    - temp_max_max: the maximum value of the "temp_max" variable from the Daily statistics product. This equates to the maximum temperature for each grid cell for the corresponding month.
    - temp_max_mean: the mean value of the "temp_max" variable from the Daily statistics product. This equates to the mean maximum temperature for each grid cell for the corresponding month.
    - temp_mean: the mean value of the "temp_mean" variable from the Daily statistics product. This equates to the mean temperature for each grid cell for the corresponding month.
    - temp_range_mean: the mean value of the "temp_range" variable from the Daily statistics product. This equates to the mean range of temperatures for each grid cell for the corresponding month.
    - eta_min_min: the minimum value of the "eta_min" variable from the Daily statistics product. This equates to the minimum surface elevation for each grid cell for the corresponding month.
    - eta_min_mean: the mean value of the "eta_min" variable from the Daily statistics product. This equates to the mean minimum surface elevation for each grid cell for the corresponding month.
    - eta_max_max: the maximum value of the "eta_max" variable from the Daily statistics product. This equates to the maximum surface elevation for each grid cell for the corresponding month.
    - eta_max_mean: the mean value of the "eta_max" variable from the Daily statistics product. This equates to the mean maximum surface elevation for each grid cell for the corresponding month.
    - eta_mean: the mean value of the "eta_mean" variable from the Daily statistics product. This equates to the mean surface elevation for each grid cell for the corresponding month.
    - eta_range_mean: the mean value of the "eta_range" variable from the Daily statistics product. This equates to the mean range of surface elevations for each grid cell for the corresponding month.

    Depths:
    Depths at 1km resolution: -2.35m, -5.35m, -18.0m, -49.0m
    Depths at 4km resolution: -1.5m, -5.55m, -17.75m, -49.0m

    What does this dataset show:

    The temperature statistics show that inshore areas along the coast get significantly warmer in summer and cooler in winter than offshore areas. The daily temperature range is lower in winter, with most areas experiencing a 0.2 - 0.3 degrees Celsius temperature change. In the summer months the daily temperature range approximately doubles, with upwelling areas in the Capricorn Bunker group, off the outer edge of the Pompey sector of reefs and on the east side of Torres Strait seeing daily temperature ranges between 0.7 - 1.2 degrees Celsius.

    Limitations:

    This dataset is based on spatial and temporal models and so is an estimate of the environmental conditions. It is not based on in-water measurements, and thus will have a spatially varying level of error in the modelled values. It is important to consider whether the model results are fit for the intended purpose.

    Change Log:
    2025-10-29: Updated the metadata title from 'eReefs AIMS-CSIRO Statistics of hydrodynamic model outputs' to 'Daily and monthly minimum, maximum and range of eReefs hydrodynamic model outputs - temperature, water elevation (AIMS, Source: CSIRO)'. Improve the introduction text. Corrected deprecated link to NCI THREDDS. Added a description of what the dataset shows.

  18. Maximum and Minimum System Load (MW)

    • data.gov.qa
    • qatar.opendatasoft.com
    csv, excel, json
    Updated Jun 11, 2025
    Cite
    (2025). Maximum and Minimum System Load (MW) [Dataset]. https://www.data.gov.qa/explore/dataset/maximum-and-minimum-system-load-mw/
    Explore at:
    csv, excel, jsonAvailable download formats
    Dataset updated
    Jun 11, 2025
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset records the maximum and minimum electrical system loads in megawatts (MW) observed annually in Qatar. Each entry includes the highest and lowest demand values along with the corresponding dates. The data helps monitor national electricity consumption trends, peak demand periods, and off-peak usage, supporting planning and operational efficiency.

  19. SOL All-Time High Data

    • kraken.com
    Updated Sep 25, 2023
    Kraken (2023). SOL All-Time High Data [Dataset]. https://www.kraken.com/prices/solana
    Explore at:
    Dataset updated
    Sep 25, 2023
    Dataset authored and provided by
    Kraken
    Description

    All-time high price data for Solana, including the peak value, date achieved, and current comparison metrics.

  20. US Average, Maximum, and Minimum Temperatures

    • kaggle.com
    zip
    Updated Jan 18, 2023
    The Devastator (2023). US Average, Maximum, and Minimum Temperatures [Dataset]. https://www.kaggle.com/datasets/thedevastator/2015-us-average-maximum-and-minimum-temperatures
    Explore at:
    zip (9429155 bytes)
    Available download formats
    Dataset updated
    Jan 18, 2023
    Authors
    The Devastator
    Area covered
    United States
    Description

    US Average, Maximum, and Minimum Temperatures

    Analyzing Daily Temperatures Across the USA

    By Matthew Winter [source]

    About this dataset

    This dataset features daily temperature summaries from various weather stations across the United States. It includes information such as location, average temperature, maximum temperature, minimum temperature, state name, state code, and zip code. All values equal to -999 have been removed. With this dataset you can explore how climate conditions changed throughout the year and how they varied across different regions of the country, or narrow your studies to a specific region or city.


    How to use the dataset

    This dataset offers a detailed look at daily average, minimum, and maximum temperatures across the United States, drawing on 1120 weather stations to provide a comprehensive view of temperature trends throughout the year.

    The data contains a variety of columns including station, station name, location (latitude and longitude), state name, zip code, and date. The primary focus of this dataset is the AvgTemp, MaxTemp, and MinTemp columns, which record the daily average, maximum, and minimum temperatures respectively in degrees Fahrenheit.

    To use this dataset effectively, consider multiple views before undertaking any analysis or drawing conclusions:
    - Plot each station's records against time, with one line per station. This helps identify outliers that need further examination, much as viewing data on a scatterplot with confidence bands reveals variance between points that is hard to see when everything is plotted on a single graph.
    - Compare states with grouped bar charts, grouping the Avg/Max/Min temperatures for each state within one chart so that any variance between states over a specific period is visible at a glance. For example, you could observe whether California saw an abnormally high temperature increase in July compared with other US states, without manually calculating figures from the raw data.

    Beyond these two approaches, further visualizations are possible: correlating particular geographical areas with different climatic conditions, or relating areas warmer or colder than the median to relative population density. Combining key metrics collected over multiple years, rather than a single year's results, allows wider inferences to be drawn.
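    The grouped-bar comparison above reduces to a groupby aggregation. A minimal sketch with a hypothetical stand-in frame (the column names follow the description above, but the rows are made up; the real data lives in '2015 USA Weather Data FINAL.csv'):

```python
import pandas as pd

# Hypothetical stand-in rows; not the actual CSV contents.
df = pd.DataFrame({
    "StateName": ["California", "California", "Texas", "Texas"],
    "AvgTemp": [75.0, 77.0, 88.0, 90.0],
    "MaxTemp": [85.0, 88.0, 99.0, 101.0],
    "MinTemp": [65.0, 66.0, 77.0, 79.0],
})

# Per-state Avg/Max/Min means: the table behind a grouped bar chart.
state_stats = df.groupby("StateName")[["AvgTemp", "MaxTemp", "MinTemp"]].mean()
print(state_stats)
# state_stats.plot(kind="bar") would then draw one Avg/Max/Min bar group per state.
```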

    Research Ideas

    • Using the Latitude and Longitude values, create a map of average temperatures across the USA. This would be useful for seeing which areas were consistently hotter or colder than others throughout the year.
    • Using the AvgTemp and StateName columns, fit a regression model to predict an area's temperature in a given month from its average temperature.
    • Plot the Date column against MaxTemp or MinTemp values as a timeline to show how temperatures changed during different times of year across various states in the US.
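    The regression idea can be sketched with an ordinary least-squares line fit. The numbers below are made up for illustration; a real fit would use the dataset's actual AvgTemp values and monthly observations:

```python
import numpy as np

# Hypothetical stations: annual average temperature vs. July maximum (deg F).
avg_temp = np.array([55.0, 60.0, 65.0, 70.0, 75.0])
july_max = np.array([82.0, 86.0, 91.0, 95.0, 100.0])

# Fit a degree-1 polynomial (a straight line) by least squares.
slope, intercept = np.polyfit(avg_temp, july_max, 1)

# Predict the July maximum for a station with an annual average of 62 deg F.
predicted = slope * 62.0 + intercept
```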

    Acknowledgements

    If you use this dataset in your research, please credit the original author, Matthew Winter.

    License

    Unknown License - Please check the dataset description for more information.

    Columns

    File: 2015 USA Weather Data FINAL.csv



Weather Data

Analyse Weather Dataset with Python


Description


The Weather Dataset is a time-series data set with per-hour information about the weather conditions at a particular location. It records Temperature, Dew Point Temperature, Relative Humidity, Wind Speed, Visibility, Pressure, and Conditions.

This data is available as a CSV file. We have analysed this data using the Pandas library.

Using this dataset, we answered multiple questions with Python in our Project.

Q. 1) Find all the unique 'Wind Speed' values in the data.

Q. 2) Find the number of times when the 'Weather is exactly Clear'.

Q. 3) Find the number of times when the 'Wind Speed was exactly 4 km/h'.

Q. 4) Find out all the Null Values in the data.

Q. 5) Rename the column name 'Weather' of the dataframe to 'Weather Condition'.

Q. 6) What is the mean 'Visibility'?

Q. 7) What is the Standard Deviation of 'Pressure' in this data?

Q. 8) What is the Variance of 'Relative Humidity' in this data?

Q. 9) Find all instances when 'Snow' was recorded.

Q. 10) Find all instances when 'Wind Speed is above 24' and 'Visibility is 25'.

Q. 11) What is the Mean value of each column against each 'Weather Condition'?

Q. 12) What is the Minimum & Maximum value of each column against each 'Weather Condition'?

Q. 13) Show all the Records where Weather Condition is Fog.

Q. 14) Find all instances when 'Weather is Clear' or 'Visibility is above 40'.

Q. 15) Find all instances when : A. 'Weather is Clear' and 'Relative Humidity is greater than 50' or B. 'Visibility is above 40'
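A hedged sketch of how a few of these questions can be answered with pandas. The tiny frame below is illustrative, not the real CSV; the column names follow the feature list:

```python
import pandas as pd

# Four made-up hourly observations using the dataset's column names.
df = pd.DataFrame({
    "Weather": ["Clear", "Fog", "Clear", "Snow"],
    "Wind Speed_km/h": [4, 7, 30, 4],
    "Visibility_km": [48.3, 0.8, 25.0, 4.8],
    "Rel Hum_%": [40, 96, 55, 86],
})

unique_speeds = df["Wind Speed_km/h"].unique()            # Q.1: unique wind speeds
n_clear = (df["Weather"] == "Clear").sum()                # Q.2: count of 'Clear'
nulls = df.isnull().sum()                                 # Q.4: nulls per column
df = df.rename(columns={"Weather": "Weather Condition"})  # Q.5: rename column
snow = df[df["Weather Condition"] == "Snow"]              # Q.9: rows with 'Snow'

# Q.15: ('Clear' and Rel Hum > 50) or (Visibility > 40).
combo = df[
    (df["Weather Condition"] == "Clear") & (df["Rel Hum_%"] > 50)
    | (df["Visibility_km"] > 40)
]
```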

These are the main Features/Columns available in the dataset :

  • Date/Time - The timestamp when the weather observation was recorded. Format: M/D/YYYY H:MM.

  • Temp_C - The air temperature in degrees Celsius at the time of observation.

  • Dew Point Temp_C - The temperature at which air becomes saturated with moisture (dew point), also measured in degrees Celsius.

  • Rel Hum_% - The relative humidity, expressed as a percentage (%), indicating how much moisture is in the air compared to the maximum it could hold at that temperature.

  • Wind Speed_km/h - The speed of the wind at the time of observation, measured in kilometers per hour.

  • Visibility_km - The distance one can clearly see, measured in kilometers. Lower values often indicate fog or precipitation.

  • Press_kPa - Atmospheric pressure at the time of observation, measured in kilopascals (kPa).

  • Weather - A text description of the observed weather conditions, such as "Fog", "Rain", or "Snow".
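Given those columns, a hedged loading sketch follows. The two inline rows are an assumption standing in for the downloaded CSV; the date format matches the M/D/YYYY H:MM layout described above:

```python
import io

import pandas as pd

# Inline sample standing in for the real CSV download.
csv_text = (
    "Date/Time,Temp_C,Dew Point Temp_C,Rel Hum_%,"
    "Wind Speed_km/h,Visibility_km,Press_kPa,Weather\n"
    "1/1/2012 0:00,-1.8,-3.9,86,4,8.0,101.24,Fog\n"
    "1/1/2012 1:00,-1.8,-3.7,87,4,8.0,101.24,Fog\n"
)

df = pd.read_csv(io.StringIO(csv_text))

# Parse the M/D/YYYY H:MM timestamps and index on them for time-series work.
df["Date/Time"] = pd.to_datetime(df["Date/Time"], format="%m/%d/%Y %H:%M")
df = df.set_index("Date/Time")
```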
