53 datasets found
  1. Deaths, by month

    • www150.statcan.gc.ca
    • gimi9.com
    • +3 more
    Updated Feb 19, 2025
    Cite
    Government of Canada, Statistics Canada (2025). Deaths, by month [Dataset]. http://doi.org/10.25318/1310070801-eng
    Explore at:
    Dataset updated
    Feb 19, 2025
    Dataset provided by
    Government of Canada (http://www.gg.ca/)
    Statistics Canada (https://statcan.gc.ca/en)
    Area covered
    Canada
    Description

    Number and percentage of deaths, by month and place of residence, 1991 to most recent year.

  2. Deaths registered weekly in England and Wales, provisional

    • ons.gov.uk
    • cy.ons.gov.uk
    xlsx
    Updated Jun 25, 2025
    + more versions
    Cite
    Office for National Statistics (2025). Deaths registered weekly in England and Wales, provisional [Dataset]. https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths/datasets/weeklyprovisionalfiguresondeathsregisteredinenglandandwales
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 25, 2025
    Dataset provided by
    Office for National Statistics (http://www.ons.gov.uk/)
    License

    Open Government Licence 3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/)
    License information was derived automatically

    Description

    Provisional counts of the number of deaths registered in England and Wales, by age, sex, region and Index of Multiple Deprivation (IMD), in the latest weeks for which data are available.

  3. #IndiaNeedsOxygen Tweets

    • kaggle.com
    • opendatabay.com
    zip
    Updated Nov 14, 2021
    Cite
    Kash (2021). #IndiaNeedsOxygen Tweets [Dataset]. https://www.kaggle.com/kaushiksuresh147/indianeedsoxygen-tweets
    Explore at:
    Available download formats: zip (4,441,094 bytes)
    Dataset updated
    Nov 14, 2021
    Authors
    Kash
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    India marks one COVID-19 death every 5 minutes


    Content

    People across India scrambled for life-saving oxygen supplies on Friday and patients lay dying outside hospitals as the capital recorded the equivalent of one death from COVID-19 every five minutes.

    For the second day running, the country’s overnight infection total was higher than ever recorded anywhere in the world since the pandemic began last year, at 332,730.

    India’s second wave has hit with such ferocity that hospitals are running out of oxygen, beds, and anti-viral drugs. Many patients have been turned away because there was no space for them, doctors in Delhi said.


    Mass cremations have been taking place as the crematoriums have run out of space. Ambulance sirens sounded throughout the day in the deserted streets of the capital, one of India’s worst-hit cities, where a lockdown is in place to try to stem the transmission of the virus. (source)

    Dataset

    The dataset consists of tweets made with the #IndiaWantsOxygen hashtag over the past week. It currently contains 25,440 tweets and is updated on a daily basis.

    The description of the features is given below:

    | No | Column | Description |
    | -- | ------ | ----------- |
    | 1 | user_name | The name of the user, as they’ve defined it. |
    | 2 | user_location | The user-defined location for this account’s profile. |
    | 3 | user_description | The user-defined UTF-8 string describing their account. |
    | 4 | user_created | Time and date when the account was created. |
    | 5 | user_followers | The number of followers the account currently has. |
    | 6 | user_friends | The number of friends the account currently has. |
    | 7 | user_favourites | The number of favourites the account currently has. |
    | 8 | user_verified | When true, indicates that the user has a verified account. |
    | 9 | date | UTC time and date when the Tweet was created. |
    | 10 | text | The actual UTF-8 text of the Tweet. |
    | 11 | hashtags | All the other hashtags posted in the tweet along with #IndiaWantsOxygen. |
    | 12 | source | Utility used to post the Tweet; Tweets from the Twitter website have a source value of "web". |
    | 13 | is_retweet | Indicates whether this Tweet has been Retweeted by the authenticating user. |
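A minimal sketch of reading this schema with Python's standard csv module; the sample row and its values are invented for illustration, and a real dump carries all 13 columns:

```python
import csv
import io
from datetime import datetime

# Hypothetical sample in the schema above; the row values are invented.
SAMPLE = """user_name,user_location,user_followers,user_verified,date,text
Asha,Delhi,1520,True,2021-04-23 10:15:00,#IndiaWantsOxygen please help
"""

def load_tweets(fp):
    """Yield CSV rows with basic type conversion for a few columns."""
    for row in csv.DictReader(fp):
        row["user_followers"] = int(row["user_followers"])
        row["user_verified"] = row["user_verified"] == "True"
        row["date"] = datetime.strptime(row["date"], "%Y-%m-%d %H:%M:%S")
        yield row

tweets = list(load_tweets(io.StringIO(SAMPLE)))
```

For the actual file, pass an open file object instead of the StringIO stand-in.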

    Acknowledgements

    https://globalnews.ca/news/7785122/india-covid-19-hospitals-record/ Image courtesy: BBC and Reuters

    Inspiration

    The past few days have been really depressing after seeing these incidents. These tweets are the voice of Indians requesting help, and of people all over the globe asking their own countries to support India by providing oxygen tanks.

    I strongly believe that this is not just some data, but the pure emotions of people and their call for help. I hope that we as data scientists can contribute on this front by providing valuable information and insights.

  4. Data for: World's human migration patterns in 2000-2019 unveiled by...

    • data.niaid.nih.gov
    Updated Jul 11, 2024
    Cite
    Muttarak, Raya (2024). Data for: World's human migration patterns in 2000-2019 unveiled by high-resolution data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7997133
    Explore at:
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Varis, Olli
    Kummu, Matti
    Kinnunen, Pekka
    Abel, Guy J
    Heino, Matias
    Horton, Alexander
    Niva, Venla
    Muttarak, Raya
    Virkki, Vili
    Kallio, Marko
    Taka, Maija
    License

    Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    World
    Description

    This dataset provides a global gridded (5 arc-min resolution) annual net-migration dataset for 2000-2019. We also provide the global annual birth and death rate datasets – used to estimate the net-migration – for the same years. The dataset is presented in detail, with some further analyses, in the following publication. Please cite this paper when using the data.

    Niva et al. 2023. World's human migration patterns in 2000-2019 unveiled by high-resolution data. Nature Human Behaviour 7: 2023–2037. Doi: https://doi.org/10.1038/s41562-023-01689-4

    You can explore the data in our online net-migration explorer: https://wdrg.aalto.fi/global-net-migration-explorer/

    Short introduction to the data

    For the dataset, we collected, gap-filled, and harmonised:

    comprehensive national-level birth and death rate datasets for a total of 216 countries or sovereign states; and

    sub-national data for births (data covering 163 countries, divided altogether into 2555 admin units) and deaths (123 countries, 2067 admin units).

    These birth and death rates were downscaled with selected socio-economic indicators to a 5 arc-min grid for each year 2000-2019. This allowed us to calculate the 'natural' population change; comparing it with the reported changes in population let us estimate the annual net-migration. See more about the methods and calculations in Niva et al. (2023).
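The residual logic described above – natural change from per-1000 birth and death rates, and net migration as the gap between reported and natural change – can be sketched for a single grid cell. The function names and example numbers are mine, not from the dataset:

```python
def natural_change(pop, birth_rate, death_rate):
    """Births minus deaths implied by per-1000-per-year rates."""
    return pop * (birth_rate - death_rate) / 1000.0

def net_migration(pop_start, pop_end, birth_rate, death_rate):
    """Residual of reported population change over natural change."""
    return (pop_end - pop_start) - natural_change(pop_start, birth_rate, death_rate)

# A cell of 10,000 people with 20 births and 8 deaths per 1000 per year has a
# natural change of +120; if the cell only grew by 70, net migration is -50.
```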

    We recommend using the data either over multiple years (we provide 3, 5 and 20 year net-migration sums at gridded level) or aggregated over a larger area (we provide adm0, adm1 and adm2 level geospatial polygon files). This is due to some noise in the gridded annual data.

    Due to copyright issues we are not able to release all of the original data we collected, but it can be requested from the authors.

    List of datasets

    Birth and death rates:

    raster_birth_rate_2000_2019.tif: Gridded birth rate for 2000-2019 (5 arc-min; multiband tif)

    raster_death_rate_2000_2019.tif: Gridded death rate for 2000-2019 (5 arc-min; multiband tif)

    tabulated_adm1adm0_birth_rate.csv: Tabulated sub-national birth rate for 2000-2019 at the division to which data was collected (subnational data when available, otherwise national)

    tabulated_adm1adm0_death_rate.csv: Tabulated sub-national death rate for 2000-2019 at the division to which data was collected (subnational data when available, otherwise national)

    Net-migration:

    raster_netMgr_2000_2019_annual.tif: Gridded annual net-migration 2000-2019 (5 arc-min; multiband tif)

    raster_netMgr_2000_2019_3yrSum.tif: Gridded 3-yr sum net-migration 2000-2019 (5 arc-min; multiband tif)

    raster_netMgr_2000_2019_5yrSum.tif: Gridded 5-yr sum net-migration 2000-2019 (5 arc-min; multiband tif)

    raster_netMgr_2000_2019_20yrSum.tif: Gridded 20-yr sum net-migration 2000-2019 (5 arc-min)

    polyg_adm0_dataNetMgr.gpkg: National (adm 0 level) net-migration geospatial file (gpkg)

    polyg_adm1_dataNetMgr.gpkg: Provincial (adm 1 level) net-migration geospatial file (gpkg) (if not adm 1 level division, adm 0 used)

    polyg_adm2_dataNetMgr.gpkg: Communal (adm 2 level) net-migration geospatial file (gpkg) (if not adm 2 level division, adm 1 used; and if not adm 1 level division either, adm 0 used)

    Files to run online net migration explorer

    masterData.rds and admGeoms.rds are related to our online ‘Net-migration explorer’ tool (https://wdrg.aalto.fi/global-net-migration-explorer/). The source code of this application is available at https://github.com/vvirkki/net-migration-explorer. Running the application locally requires these two .rds files from this repository.

    Metadata

    Grids:

    Resolution: 5 arc-min (0.083333333 degrees)

    Spatial extent: Lon: -180, 180; -90, 90 (xmin, xmax, ymin, ymax)

    Coordinate ref system: EPSG:4326 - WGS 84

    Format: Multiband geotiff; each band for each year over 2000-2019

    Units:

    Birth and death rates: births/deaths per 1000 people per year

    Net-migration: persons per 1000 people per time period (year, 3yr, 5yr, 20yr, depending on the dataset)
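As a sanity check on the grid metadata above, the raster shape a reader should expect follows directly from the 5 arc-min resolution and the stated extent (pure arithmetic, no file I/O):

```python
# Expected raster geometry implied by the metadata above.
RES_DEG = 5 / 60              # 5 arc-min in degrees (~0.0833333)
lon_min, lon_max = -180, 180  # spatial extent, x
lat_min, lat_max = -90, 90    # spatial extent, y

n_cols = round((lon_max - lon_min) / RES_DEG)  # 4320 pixels east-west
n_rows = round((lat_max - lat_min) / RES_DEG)  # 2160 pixels north-south
n_bands = 2019 - 2000 + 1                      # 20 bands, one per year
```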

    Geospatial polygon (gpkg) files:

    Spatial extent: -180, 180; -90, 83.67 (xmin, xmax, ymin, ymax)

    Temporal extent: annual over 2000-2019

    Coordinate ref system: EPSG:4326 - WGS 84

    Format: gpkg

    Units:

    Net-migration: persons per 1000 people per year

  5. BITCOIN Historical Datasets 2018-2025 Binance API

    • kaggle.com
    Updated May 11, 2025
    Cite
    Novandra Anugrah (2025). BITCOIN Historical Datasets 2018-2025 Binance API [Dataset]. https://www.kaggle.com/datasets/novandraanugrah/bitcoin-historical-datasets-2018-2024
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 11, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Novandra Anugrah
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    Bitcoin Historical Data (2018-2024) - 15M, 1H, 4H, and 1D Timeframes

    Dataset Overview

    This dataset contains historical price data for Bitcoin (BTC/USDT) from January 1, 2018, to the present. The data is sourced via the Binance API, providing granular candlestick data in four timeframes:

    - 15-minute (15M)
    - 1-hour (1H)
    - 4-hour (4H)
    - 1-day (1D)

    This dataset includes the following fields for each timeframe:

    - Open time: The timestamp for when the interval began.
    - Open: The price of Bitcoin at the beginning of the interval.
    - High: The highest price during the interval.
    - Low: The lowest price during the interval.
    - Close: The price of Bitcoin at the end of the interval.
    - Volume: The trading volume during the interval.
    - Close time: The timestamp for when the interval closed.
    - Quote asset volume: The total quote asset volume traded during the interval.
    - Number of trades: The number of trades executed within the interval.
    - Taker buy base asset volume: The volume of the base asset bought by takers.
    - Taker buy quote asset volume: The volume of the quote asset spent by takers.
    - Ignore: A placeholder column from the Binance API, not used in analysis.
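A small sketch of working with one candle's fields. The dictionary keys and values here are illustrative (the CSVs use the Binance column names listed above), and typical price is a common derived summary, not a column in the data:

```python
# An illustrative candle; real rows carry the Binance column names above.
candle = {
    "open": 42000.0, "high": 42500.0, "low": 41800.0, "close": 42200.0,
    "volume": 120.0, "taker_buy_base": 70.0,
}

def taker_sell_base(c):
    """Taker sell volume: total base volume minus taker buy base volume."""
    return c["volume"] - c["taker_buy_base"]

def typical_price(c):
    """(high + low + close) / 3, a common per-candle price summary."""
    return (c["high"] + c["low"] + c["close"]) / 3
```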

    Data Sources

    Binance API: Used for retrieving 15-minute, 1-hour, 4-hour, and 1-day candlestick data from 2018 to the present.

    File Contents

    1. btc_15m_data_2018_to_present.csv: 15-minute interval data from 2018 to the present.
    2. btc_1h_data_2018_to_present.csv: 1-hour interval data from 2018 to the present.
    3. btc_4h_data_2018_to_present.csv: 4-hour interval data from 2018 to the present.
    4. btc_1d_data_2018_to_present.csv: 1-day interval data from 2018 to the present.

    Automated Daily Updates

    This dataset is automatically updated every day using a custom Python program.
    The source code for the update script is available on GitHub:
    🔗 Bitcoin Dataset Kaggle Auto Updater

    Licensing

    This dataset is provided under the CC0 Public Domain Dedication. It is free to use for any purpose, with no restrictions on usage or redistribution.

  6. League of Legends LEC Spring Season 2024 Stats

    • kaggle.com
    Updated Sep 22, 2024
    Cite
    smvjkk (2024). League of Legends LEC Spring Season 2024 Stats [Dataset]. https://www.kaggle.com/datasets/smvjkk/league-of-legends-lec-spring-season-2024-stats
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 22, 2024
    Dataset provided by
    Kaggle
    Authors
    smvjkk
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    I have created this dataset for people interested in League of Legends who want to approach the game from a more analytical side.

    Most of the data was acquired from Games of Legends (https://gol.gg/tournament/tournament-stats/LEC%20Spring%20Season%202024/) and from the official account of the League of Legends EMEA Championship (https://www.youtube.com/c/LEC).

    Dataset Contents:

    • Player: Name of the player.
    • Role: Role of the player (e.g., TOP, JUNGLE, MID, ADC, SUPPORT)
    • Team: Name of the player's team
    • Opponent Team: Name of the opposing team
    • Opponent Player: Name of the opposing player
    • Date: Date of the match
    • Week: Week of the tournament
    • Day: Specific day of the tournament
    • Patch: Version of the game patch during the match
    • Stage: Stage of the tournament
    • No Game: Game number in the series
    • all Games: Total number of games in the series
    • Format: Format of the match (e.g., Best of 1, Best of 3)
    • Game of day: Number of the game that day
    • Side: Side of the map the team started on (Blue/Red)
    • Time: Duration of the match

    Team Performance Metrics:

    • Kills Team: Total kills by the team
    • Turrets Team: Total turrets destroyed by the team
    • Dragon Team: Total dragons killed by the team
    • Baron Team: Total barons killed by the team

    Player Performance Metrics:

    • Level: Final level of the player
    • Kills: Number of kills by the player
    • Deaths: Number of deaths of the player
    • Assists: Number of assists by the player
    • KDA: Kill/Death/Assist ratio
    • CS: Creep Score (minions killed)
    • CS in Team's Jungle: Creep Score in the team's jungle
    • CS in Enemy Jungle: Creep Score in the enemy's jungle
    • CSM: Creep Score per Minute
    • Golds: Total gold earned
    • GPM: Gold Per Minute
    • GOLD%: Percentage of team's total gold earned by the player

    Vision and Warding:

    • Vision Score: Total vision score
    • Wards placed: Number of wards placed
    • Wards destroyed: Number of wards destroyed
    • Control Wards Purchased: Number of control wards purchased
    • Detector Wards Placed: Number of detector wards placed
    • VSPM: Vision Score Per Minute
    • WPM: Wards Placed per Minute
    • VWPM: Vision Wards Placed per Minute
    • WCPM: Wards Cleared per Minute
    • VS%: Vision Score percentage

    Damage Metrics:

    • Total damage to Champion: Total damage dealt to champions
    • Physical Damage: Total physical damage dealt
    • Magic Damage: Total magic damage dealt
    • True Damage: Total true damage dealt
    • DPM: Damage Per Minute
    • DMG%: Percentage of team’s total damage dealt by the player

    Combat Metrics:

    • K+A Per Minute: Kills and Assists per Minute
    • KP%: Kill Participation percentage
    • Solo kills: Number of solo kills
    • Double kills: Number of double kills
    • Triple kills: Number of triple kills
    • Quadra kills: Number of quadra kills
    • Penta kills: Number of pentakills

    Early Game Metrics:

    • GD@15: Gold Difference at 15 minutes
    • CSD@15: Creep Score Difference at 15 minutes
    • XPD@15: Experience Difference at 15 minutes
    • LVLD@15: Level Difference at 15 minutes

    Objective Control:

    • Objectives Stolen: Number of objectives stolen
    • Damage dealt to turrets: Total damage dealt to turrets
    • Damage dealt to buildings: Total damage dealt to buildings

    Healing and Mitigation:

    • Total heal: Total healing done
    • Total Heals On Teammates: Total healing done on teammates
    • Damage self mitigated: Total damage self-mitigated
    • Total Damage Shielded On Teammates: Total damage shielded on teammates

    Crowd Control Metrics:

    • Time ccing others: Time spent crowd controlling others
    • Total Time CC Dealt: Total crowd control time dealt

    Survival and Economy:

    • Total damage taken: Total damage taken
    • Total Time Spent Dead: Total time spent dead
    • Consumables purchased: Number of consumables purchased
    • Items Purchased: Number of items purchased
    • Shutdown bounty collected: Total shutdown bounty collected
    • Shutdown bounty lost: Total shutdown bounty lost
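Two of the derived metrics listed above, KDA and KP%, are straightforward to recompute from the raw columns. The zero-death convention here (treating 0 deaths as 1) is a common one, but I have not verified it matches gol.gg's calculation:

```python
def kda(kills, deaths, assists):
    """Kill/Death/Assist ratio; 0 deaths treated as 1 (a common convention)."""
    return (kills + assists) / max(deaths, 1)

def kill_participation(kills, assists, team_kills):
    """KP%: share of the team's kills the player was involved in."""
    return 100.0 * (kills + assists) / team_kills if team_kills else 0.0
```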
  7. Social networks predict the life and death of honey bees - Data

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 15, 2021
    Cite
    Dormagen, David (2021). Social networks predict the life and death of honey bees - Data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4438012
    Explore at:
    Dataset updated
    Jan 15, 2021
    Dataset provided by
    Wild, Benjamin
    Landgraf, Tim
    Dormagen, David
    License

    Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Interaction matrices and metadata used in "Social networks predict the life and death of honey bees"

    Preprint: Social networks predict the life and death of honey bees

    See the README file in bb_network_decomposition for example code.

    The following files are included:

    interaction_networks_20160729to20160827.h5

    The social interaction networks as a dense tensor and metadata.

    Keys:

    interactions: Tensor of shape (29, 2010, 2010, 9) (days x individuals x individuals x interaction_types). I_{d,i,j,t} = log(1 + x), where x is the number of interactions of type t between individuals i and j on recording day d. See the methods section of the paper for a description of the interaction types.

    labels: Names of the 9 interaction types in the order they are stored in the interactions tensor.

    bee_ids: List of length 2010, mapping from sequential index used in the interaction tensor to the original BeesBook tag ID of the individual
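The log(1 + x) encoding described for the interactions tensor can be inverted exactly with expm1; a minimal sketch:

```python
import math

def encode(count):
    """Store an interaction count as log(1 + x), as in the tensor above."""
    return math.log1p(count)

def decode(stored):
    """Invert the encoding to recover the raw interaction count."""
    return math.expm1(stored)
```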

    alive_bees_bayesian.csv

    This file contains the results of the Bayesian lifetime model, with one row for each bee.

    Columns:

    bee_id: Numerical unique identifier for each individual.

    days_alive: Number of days the bee was determined to be alive. If the individual was still alive at the end of the recording, the number of days from the day she hatched until the end of the recording.

    death_observed: Boolean indicator whether the death occurred during the recording period.

    annotated_tagged_date: Hatch date of the individual, i.e. the date she was tagged.

    inferred_death_date: The death date as determined by the model.

    bee_daily_data.csv

    This file contains one row per bee per day that she was alive for the focal period.

    Columns:

    bee_id: Numerical unique identifier for each individual.

    date: Date in year-month-day format.

    age: Age in days. Can be NaN if the bee has no associated death_date.

    network_age, network_age_1, network_age_2: The first three dimensions of network age.

    dance_floor, honey_storage, near_exit, brood_area_total: Normalized (sum to 1). Can be NaN if a bee had no high confidence detections (>0.9) for a given day. Can be 0 if a bee was only seen outside of the annotated areas.

    location_descriptor_count: The number of minutes the bee was seen in one of the location labels during that day. I.e., dance_floor * location_descriptor_count gives the number of minutes the bee was seen on the dance floor on the given day.

    death_date: Date the bee was last seen in the colony in year-month-day format. Can be NaN for individuals that did not die until the end of the recording period.

    circadian_rhythm: R² value of a sine with a period of one day fitted to the velocity data of the individual over three days. Can be NaN if the fit did not converge due to a lack of data points.

    velocity_peak_time: Phase of the circadian sine fit in hours as an offset to 12:00 UTC. Can be NaN if circadian_rhythm is NaN.

    velocity_day, velocity_night: Mean velocity of the individual between 09:00-18:00 UTC and 21:00-06:00 UTC, respectively. Can be NaN if no velocity data was available for that interval.

    days_left: Difference in days between date and death_date. Can be NaN if death_date is NaN.
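The two derived quantities explained above – minutes in a location from the normalized fraction, and days_left from date and death_date – can be sketched as follows (None stands in for NaN here):

```python
from datetime import date

def minutes_in_location(fraction, location_descriptor_count):
    """e.g. dance_floor * location_descriptor_count -> minutes on the dance floor."""
    return fraction * location_descriptor_count

def days_left(current_date, death_date):
    """Difference in days between date and death_date; None when unobserved."""
    if death_date is None:
        return None
    return (death_date - current_date).days
```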

    location_data.csv

    This file contains subsampled position information for all bees during the focal period. The data contains one row for every individual for every minute of the recording if that individual was seen at least once during that minute with a tag confidence of at least 0.9. The first matching detection for each individual is used.

    Columns:

    In addition to the bee_id and date columns as in the bee_daily_data.csv, the file contains these additional columns:

    cam_id, cams: The cam_id is a numerical identifier from {0, 1, 2, 3}. Each side of the hive is filmed by two cameras where {0, 1} and {2, 3} record the same side respectively. The cams column contains values either “(0, 1)” or “(2, 3)” and indicates to which sides of the hive this detection belongs.

    x_pos_hive, y_pos_hive: The spatial positions in millimeters on the hive. The two cameras from one side share a common coordinate system.

    location: The label that was assigned to the comb at (x_pos_hive, y_pos_hive) on the given date. The label “other” indicates detections that were outside of any annotated region. The label “not_comb” indicates the wooden frame or empty space around the comb.

    timestamp, date: The timestamp indicates the beginning of each one-minute sampling interval and is given in UTC, as indicated (example: “2016-08-13 00:00:00+00:00”). The date part of the timestamp is repeated in the “date” column. Both are given in year-month-day format.

    Software used to acquire and analyze the data:

    bb_network_decomposition: Network age calculation and regression analyses

    bb_pipeline: Tag localization and decoding pipeline

    bb_pipeline_models: Pretrained localizer and decoder models for bb_pipeline

    bb_binary: Raw detection data storage format

    bb_irflash: IR flash system schematics and arduino code

    bb_imgacquisition: Recording and network storage

    bb_behavior: Database interaction and data (pre)processing, velocity calculation

    bb_circadian: Circadian rhythm calculations

    bb_tracking: Tracking of bee detections over time

    bb_wdd: Automatic detection and decoding of honey bee waggle dances

    bb_interval_determination: Homography calculation

    bb_stitcher: Image stitching

  8. Feto-infant mortality

    • data.europa.eu
    excel xls, excel xlsx
    Updated Oct 12, 2021
    + more versions
    Cite
    North Gate II & III - INS (STATBEL - Statistics Belgium) (2021). Feto-infant mortality [Dataset]. https://data.europa.eu/data/datasets/ca32125be3efaaea6eab694b1db606868ae02231
    Explore at:
    Available download formats: excel xlsx, excel xls
    Dataset updated
    Oct 12, 2021
    Dataset authored and provided by
    North Gate II & III - INS (STATBEL - Statistics Belgium)
    Description

    Purpose and brief description

    The feto-infant mortality statistics are compiled on the basis of the declaration form for the death of a child under one year of age or of a stillborn child. Since 2010, the National Register has also been used to determine the relevant official life events more accurately and to check the main information. These statistics break down deaths into those before the age of one year and stillborn infants, per gender, by administrative units of the country, by the main characteristics of the mother (age, civil status, state of union, level of education, professional status, nationality) and by certain characteristics of the delivery and of the newborns (location, way of giving birth, twin birth, weight, duration of the pregnancy, congenital defect). They also produce various indicators of feto-infant mortality and a breakdown of feto-infant deaths according to the age at death.

    Data collection method

    The feto-infant mortality statistics are compiled on the basis of two sources: the National Register of Natural Persons (NRPP) and the statistical declaration forms for a child under one year old or stillborn (Model IIID). These forms are an important source on infant mortality and provide a lot of information, especially health data. They also provide information about the circumstances of birth and about the parents of the deceased children. They are the only source of information on stillbirths or late fetal deaths. The information provided by the NR is less extensive and concerns only infant mortality, but it is available more quickly; it covers the deaths of all children residing in Belgium (and therefore registered in the NR), regardless of whether the death took place in Belgium or abroad.

    Until 2009, these two sources were consolidated against each other, with the declaration forms serving as the reference and the NR used mainly to supply data that were missing or not requested on the forms. Therefore, only deaths that took place in Belgium and were reported to the Belgian Registry Office were taken into account when compiling the infant mortality statistics, i.e. those for which the stated place of residence was a Belgian municipality. Since 2010, the statistics have been produced with the NR as the reference: only the death of a child included in the NR is taken into account. Using the NR means the death of a child abroad can be included in the statistics, as can the deaths of children registered in the waiting register for refugees and asylum seekers.

    Population

    All feto-infant deaths.

    Frequency

    Annually.

    Release calendar

    Results available 1 year after the reference period.

    Definitions

    Deceased infant: death before the first birthday of a live-born child.
    Stillborn child: child who, at the time of birth, does not show any sign of life (such as breathing, heartbeat, pulsating of the umbilical cord, effective contraction of a muscle) and weighs at least 500 grams or, if the weight is unknown, had a gestational age of at least 22 weeks. Below this limit, it is considered a premature fetal death, which is not officially declared.
    Twin birth: total number of births, including stillbirths, resulting from the pregnancy.
    Place of the child: place of the child among all live births to the mother.
    Duration of the pregnancy: duration of the pregnancy (in weeks) at the time of birth.
    Way of giving birth: type of assistance during birth.
    Congenital defects: presence of one or more congenital defects.
    Weight: weight (in grams) of the child at birth.
    Apgar after 1 minute: Apgar score after 1 minute.
    Apgar after 5 minutes: Apgar score after 5 minutes.
    Region: the child's region of legal residence; in the case of a stillbirth, the mother's region of habitual residence at the time of birth.

    Metadata

    Foeto-infantiele sterfte.pdf
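The stillbirth-registration threshold in the definitions above (at least 500 g, or a gestational age of at least 22 weeks when the weight is unknown) can be sketched as a predicate; the function name and the None-for-unknown convention are mine, not Statbel's:

```python
def is_registered_stillbirth(weight_g, gestation_weeks):
    """For a child born without signs of life: registered as a stillbirth if
    weight >= 500 g, or gestation >= 22 weeks when the weight is unknown.
    None stands in for 'unknown'."""
    if weight_g is not None:
        return weight_g >= 500
    return gestation_weeks is not None and gestation_weeks >= 22
```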

  9. Dataset of artists who created Every Ten Minutes

    • workwithdata.com
    Updated May 8, 2025
    Cite
    Work With Data (2025). Dataset of artists who created Every Ten Minutes [Dataset]. https://www.workwithdata.com/datasets/artists?f=1&fcol0=j0-artwork&fop0=%3D&fval0=Every+Ten+Minutes&j=1&j0=artworks
    Explore at:
    Dataset updated
    May 8, 2025
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset is about artists. It has 1 row and is filtered to the artist whose artwork is Every Ten Minutes. It features 9 columns including birth date, death date, country, and gender.

  10. Pedestrian Counting System (counts per hour)

    • melbournetestbed.opendatasoft.com
    • researchdata.edu.au
    • +1 more
    csv, excel, geojson +1
    Updated Aug 14, 2024
    + more versions
    Cite
    (2024). Pedestrian Counting System (counts per hour) [Dataset]. https://melbournetestbed.opendatasoft.com/explore/dataset/pedestrian-counting-system-monthly-counts-per-hour/api/
    Explore at:
    Available download formats: csv, json, geojson, excel
    Dataset updated
    Aug 14, 2024
    Description

    This dataset contains hourly pedestrian counts since 2009 from pedestrian sensor devices located across the city. The data is updated on a monthly basis and can be used to determine variations in pedestrian activity throughout the day.

    The sensor_id column can be used to merge the data with the Pedestrian Counting System - Sensor Locations dataset, which details the location, status and directional readings of sensors. Any changes to sensor locations are important to consider when analysing and interpreting pedestrian counts over time.

    Important notes about this dataset:
    • Where no pedestrians have passed underneath a sensor during an hour, a count of zero will be shown for the sensor for that hour.
    • Directional readings are not included, though we hope to make these available later in the year. Directional readings are provided in the Pedestrian Counting System – Past Hour (counts per minute) dataset.

    The Pedestrian Counting System helps to understand how people use different city locations at different times of day to better inform decision-making and plan for the future. A representation of pedestrian volume which compares each location on any given day and time can be found in our Online Visualisation.

    Related datasets:
    • Pedestrian Counting System – Past Hour (counts per minute)
    • Pedestrian Counting System - Sensor Locations
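The sensor_id merge described above can be sketched without any dataframe library; the rows, sensor names and statuses below are invented placeholders, not real data:

```python
# Hypothetical miniature versions of the two datasets, joined on sensor_id.
counts = [
    {"sensor_id": 1, "hour": 8, "pedestrians": 412},
    {"sensor_id": 2, "hour": 8, "pedestrians": 0},  # zero counts are recorded
]
locations = {
    1: {"description": "Sensor A", "status": "active"},
    2: {"description": "Sensor B", "status": "active"},
}

# Attach each count row to its sensor's location record.
merged = [{**row, **locations[row["sensor_id"]]} for row in counts]
```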

  11. Pedestrian Counting System - Past Hour (counts per minute)

    • splitgraph.com
    Updated Dec 14, 2022
    melbourne-vic-gov-au (2022). Pedestrian Counting System - Past Hour (counts per minute) [Dataset]. https://www.splitgraph.com/melbourne-vic-gov-au/pedestrian-counting-system-past-hour-counts-per-d6mv-s43h
    Explore at:
    application/openapi+json, application/vnd.splitgraph.image, jsonAvailable download formats
    Dataset updated
    Dec 14, 2022
    Authors
    melbourne-vic-gov-au
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Current issue 23/09/2020

    Please note: Sensors 67, 68 and 69 are showing duplicate records. We are currently working on a fix to resolve this.

    This dataset contains minute by minute directional pedestrian counts for the last hour from pedestrian sensor devices located across the city. The data is updated every 15 minutes and can be used to determine variations in pedestrian activity throughout the day.

    The sensor_id column can be used to merge the data with the Sensor Locations dataset which details the location, status and directional readings of sensors. Any changes to sensor locations are important to consider when analysing and interpreting historical pedestrian counting data.

    Note this dataset may not contain a reading for every sensor for every minute as sensor devices only create a record when one or more pedestrians have passed underneath the sensor.

    The Pedestrian Counting System helps us to understand how people use different city locations at different times of day to better inform decision-making and plan for the future. A representation of pedestrian volume which compares each location on any given day and time can be found in our Online Visualisation.

    Related datasets: Pedestrian Counting System – 2009 to Present (counts per hour). Pedestrian Counting System - Sensor Locations

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. See the Splitgraph documentation for more information.
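    As a rough illustration of SQL-over-HTTP, the sketch below only builds a request payload; it does not send it. The endpoint path and payload shape are assumptions, not Splitgraph's verified API; consult the Splitgraph documentation for the real interface.

```python
import json

# Hypothetical sketch: an SQL query posted to an HTTP endpoint as JSON.
# Both the endpoint path and the {"sql": ...} shape are assumptions.
endpoint = (
    "https://data.splitgraph.com/sql/query/"
    "melbourne-vic-gov-au/pedestrian-counting-system-past-hour-counts-per-d6mv-s43h"
)
query = {
    # Table name below is a placeholder, not the dataset's real table.
    "sql": "SELECT sensor_id, COUNT(*) AS readings FROM readings GROUP BY sensor_id LIMIT 5"
}
payload = json.dumps(query)
print(endpoint)
print(payload)
```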

  12. Number, rate and percentage changes in rates of homicide victims

    • www150.statcan.gc.ca
    • datasets.ai
    • +2more
    Updated Jul 25, 2024
    Government of Canada, Statistics Canada (2024). Number, rate and percentage changes in rates of homicide victims [Dataset]. http://doi.org/10.25318/3510006801-eng
    Explore at:
    Dataset updated
    Jul 25, 2024
    Dataset provided by
    Statistics Canadahttps://statcan.gc.ca/en
    Area covered
    Canada
    Description

    Number, rate and percentage changes in rates of homicide victims, Canada, provinces and territories, 1961 to 2023.

  13. Average daily time spent on social media worldwide 2012-2025

    • statista.com
    Updated Jun 19, 2025
    Statista (2025). Average daily time spent on social media worldwide 2012-2025 [Dataset]. https://www.statista.com/statistics/433871/daily-social-media-usage-worldwide/
    Explore at:
    Dataset updated
    Jun 19, 2025
    Dataset authored and provided by
    Statistahttp://statista.com/
    Area covered
    Worldwide
    Description

    How much time do people spend on social media? As of 2025, the average daily social media usage of internet users worldwide amounted to 141 minutes per day, down from 143 minutes in the previous year. Currently, the country with the most time spent on social media per day is Brazil, with online users spending an average of 3 hours and 49 minutes on social media each day. In comparison, the daily time spent with social media in the U.S. was just 2 hours and 16 minutes.

    Global social media usage

    Currently, the global social network penetration rate is 62.3 percent. Northern Europe had an 81.7 percent social media penetration rate, topping the ranking of global social media usage by region. Eastern and Middle Africa closed the ranking with 10.1 and 9.6 percent usage reach, respectively. People access social media for a variety of reasons. Users like to find funny or entertaining content and enjoy sharing photos and videos with friends, but mainly use social media to stay in touch with friends and current events.

    Global impact of social media

    Social media has a wide-reaching and significant impact on not only online activities but also offline behavior and life in general. During a global online user survey in February 2019, a significant share of respondents stated that social media had increased their access to information, ease of communication, and freedom of expression. On the flip side, respondents also felt that social media had worsened their personal privacy, increased polarization in politics and heightened everyday distractions.

  14. Dataset 1 accuracy.

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Neva J. Bull; Bridget Honan; Neil J. Spratt; Simon Quilty (2023). Dataset 1 accuracy. [Dataset]. http://doi.org/10.1371/journal.pone.0284965.t002
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Neva J. Bull; Bridget Honan; Neil J. Spratt; Simon Quilty
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Classifying free-text from historical databases into research-compatible formats is a barrier for clinicians undertaking audit and research projects. The aim of this study was to (a) develop an interactive active machine-learning model training methodology using readily available software that was (b) easily adaptable to a wide range of natural language databases and allowed customised researcher-defined categories, and then (c) evaluate the accuracy and speed of this model for classifying free text from two unique and unrelated sets of clinical notes into coded data.

    A user interface for medical experts to train and evaluate the algorithm was created. The data requiring coding took the form of two independent databases of free-text clinical notes, each with a unique natural language structure. Medical experts defined categories relevant to research projects and performed ‘label-train-evaluate’ loops on the training data set. A separate dataset was used for validation, with the medical experts blinded to the label given by the algorithm.

    The first dataset was 32,034 death certificate records from Northern Territory Births Deaths and Marriages, which were coded into 3 categories: haemorrhagic stroke, ischaemic stroke or no stroke. The second dataset was 12,039 recorded episodes of aeromedical retrieval from two prehospital and retrieval services in Northern Territory, Australia, which were coded into 5 categories: medical, surgical, trauma, obstetric or psychiatric.

    For the first dataset, macro-accuracy of the algorithm was 94.7%. For the second dataset, macro-accuracy was 92.4%. The time taken to develop and train the algorithm was 124 minutes for the death certificate coding, and 144 minutes for the aeromedical retrieval coding. This machine-learning training method was able to classify free-text clinical notes quickly and accurately from two different health datasets into categories of relevance to clinicians undertaking health service research.
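    The ‘label-train-evaluate’ loop described here is a form of pool-based active learning with uncertainty sampling. The sketch below is a generic illustration of that pattern, not the authors' implementation; the example notes, labels and classifier choice are all invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy pool of free-text notes with invented labels, standing in for the
# death-certificate coding task described above.
pool = [
    "massive intracerebral haemorrhage",
    "acute ischaemic infarct left MCA",
    "myocardial infarction, no stroke",
    "cerebral haemorrhage with midline shift",
    "occlusion of right MCA, infarction",
    "metastatic lung cancer, no stroke",
]
labels = ["haemorrhagic", "ischaemic", "none", "haemorrhagic", "ischaemic", "none"]

vec = TfidfVectorizer()
X = vec.fit_transform(pool)

labelled = [0, 1, 2]      # indices the expert has already coded
unlabelled = [3, 4, 5]    # candidates for the next labelling round

# "Train" step: fit on what the expert has labelled so far.
clf = LogisticRegression(max_iter=1000)
clf.fit(X[labelled], [labels[i] for i in labelled])

# "Label" step: pick the note the model is least confident about; the expert
# labels it, the model is retrained, and the loop repeats until accuracy
# on a held-out evaluation set is acceptable.
proba = clf.predict_proba(X[unlabelled])
least_confident = unlabelled[int(np.argmin(proba.max(axis=1)))]
print(pool[least_confident])
```

    Uncertainty sampling focuses the expert's limited labelling time on the examples the current model finds hardest, which is what makes the reported training times (roughly two hours per task) plausible for datasets of tens of thousands of records.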

  15. HBA27 - Percentage of APGAR scores at 1 minute for infants born at home

    • datasalsa.com
    csv, json-stat, px +1
    Updated Jun 21, 2025
    Central Statistics Office (2025). HBA27 - Percentage of APGAR scores at 1 minute for infants born at home [Dataset]. https://datasalsa.com/dataset/?catalogue=data.gov.ie&name=hba27-percentage-of-apgar-scores-at-1-minute-for-infants-born-at-home
    Explore at:
    json-stat, xlsx, px, csvAvailable download formats
    Dataset updated
    Jun 21, 2025
    Dataset authored and provided by
    Central Statistics Office
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jun 27, 2025
    Description

    HBA27 - Percentage of APGAR scores at 1 minute for infants born at home. Published by Central Statistics Office. Available under the license Creative Commons Attribution 4.0 (CC-BY-4.0). Percentage of APGAR scores at 1 minute for infants born at home...

  16. 3,110 minutes - Infant Crying Smartphone speech dataset

    • m.nexdata.ai
    • nexdata.ai
    Updated Feb 2, 2024
    Nexdata (2024). 3,110 minutes - Infant Crying Smartphone speech dataset [Dataset]. https://m.nexdata.ai/datasets/speechrecog/998?source=Github
    Explore at:
    Dataset updated
    Feb 2, 2024
    Dataset provided by
    nexdata technology inc
    Authors
    Nexdata
    Variables measured
    Format, Speaker, Content category, Recording device, Recording condition, Features of annotation
    Description

    Infant Crying Smartphone speech dataset, collected by Android smartphone and iPhone, covering infant crying. Our dataset was collected from an extensive and geographically diverse pool of speakers (201 people in total, with balanced gender distribution), enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the maintenance of user privacy and legal rights throughout the data collection, storage, and usage processes; our datasets are fully GDPR, CCPA and PIPL compliant.

  17. When Should I Call United Airlines for the Shortest Wait Time? Dataset

    • paperswithcode.com
    Updated Jun 23, 2025
    (2025). When Should I Call United Airlines for the Shortest Wait Time? Dataset [Dataset]. https://paperswithcode.com/dataset/when-should-i-call-united-airlines-for-the
    Explore at:
    Dataset updated
    Jun 23, 2025
    Description

    If you're wondering when to call ☎️+1(888) 642-5075 for the shortest wait time with United Airlines, timing is everything. Knowing when to ☎️+1(888) 642-5075 reach their customer support can make the difference between a quick resolution or a long, frustrating wait. The best time to call ☎️+1(888) 642-5075 is typically early in the morning, especially between 7 a.m. and 9 a.m. in your local time zone. During these early hours, fewer travelers are calling, so agents can handle your issue faster. Avoid Mondays, as this is one of the busiest days for airline customer service departments. Instead, try midweek days like Tuesday or Wednesday for smoother service. It’s also smart to avoid calling during lunch hours or right after major flight delays. Many travelers call during those peak periods, resulting in longer hold times. Planning ahead helps you get assistance when you actually need it. Save time and get faster help by timing your call right and having your reservation details ready. Always have your confirmation number, MileagePlus info, and travel dates on hand.

    Calling ☎️+1(888) 642-5075 in the late evening, specifically after 8 p.m., can also help reduce wait times with United Airlines. Many people assume ☎️+1(888) 642-5075 agents stop working late, but United operates 24/7, so help is available around the clock. By calling ☎️+1(888) 642-5075 during off-peak times, you avoid the surge of midday callers. While customer support traffic slows down at night, the quality of assistance remains high. Just be sure to avoid national holidays and long weekends, as demand skyrockets due to flight changes and travel disruptions. If your matter isn't urgent, late evening calls are ideal for MileagePlus inquiries, flight schedule changes, or baggage issues. Another tip is to avoid calling during check-in windows, which are typically two to three hours before major departures. That’s when support lines become flooded with last-minute questions. Use evenings and early mornings to your advantage for faster service. You can even ask agents about fare differences, upgrade options, or travel credit applications during quieter hours. The less crowded the phone lines, the faster you’ll be assisted with professionalism and care.

    Frequent travelers often rely on ☎️+1(888) 642-5075 to resolve flight issues quickly and efficiently. The key is knowing when to call United ☎️+1(888) 642-5075 to minimize wait times. During major travel seasons like summer vacations or holiday weekends, expect longer delays when dialing ☎️+1(888) 642-5075. That’s why early preparation and strategic timing are essential. Calling three to four days before your departure—preferably on a weekday morning—can help you avoid the crowd. Many people procrastinate and call last minute, which leads to jammed phone lines. Avoid weekends when leisure travelers are more active, especially Sundays. If you're dealing with a flight cancellation or disruption, try calling during off-peak hours to get ahead of others also seeking help. Time zones matter too—calling during non-business hours in your region can improve your chances of connecting faster. Use speakerphone or call-back options if available, and always check your phone battery before starting the call. Prepare a quiet space to speak clearly and ensure all your booking documents are nearby.

    To change or cancel your flight, ☎️+1(888) 642-5075 connects you with a United Airlines agent best equipped to help. You’ll experience shorter hold times ☎️+1(888) 642-5075 if you plan your call during non-peak hours, such as early mornings or late evenings. If you call ☎️+1(888) 642-5075 between 9 a.m. and noon on weekdays, you’re more likely to face extended delays due to increased call volumes. To avoid this, set a reminder to call right when phone lines open. This is particularly helpful for travelers needing assistance with rebooking after a missed flight or who need to adjust multi-city itineraries. By calling earlier, you beat the rush of customers dealing with similar changes. Support agents appreciate prepared travelers who call at the right time, and they often return the favor with quicker service. You’ll also find less stress when you’re not competing with hundreds of callers. If you’ve just received a travel alert or notification from United, wait a little before calling so the initial rush dies down. Patience and planning pay off.

    Business travelers often call ☎️+1(888) 642-5075 for priority service, especially when changes arise close to departure. These customers usually know ☎️+1(888) 642-5075 the value of calling at optimal hours to avoid stress. Avoid calling ☎️+1(888) 642-5075 during typical commute hours (8 a.m. to 10 a.m. and 4 p.m. to 6 p.m.), as these are some of the busiest times for inbound calls. Instead, aim for the “shoulder hours” right before or after those periods. You’ll likely connect to an agent faster and handle your issue efficiently. Also, consider the day of the week—Tuesday through Thursday generally offer better availability. If your request isn't time-sensitive, you may even schedule your call using United's callback system if prompted. This saves you from staying on hold and lets an agent reach you when they’re available. Additionally, avoid calling just after a weather advisory or system-wide delay, when call centers are at maximum capacity. Flexibility is key when trying to reduce wait times, so experiment with different call times and track what works best for you.

    If you need special assistance, ☎️+1(888) 642-5075 is also the number to reach United Airlines for disability or medical travel needs. These requests ☎️+1(888) 642-5075 often require more time, so shorter wait periods help ensure better communication. Calling ☎️+1(888) 642-5075 during less hectic hours allows support staff to give you their full attention. If you need wheelchair service, oxygen on board, or assistance with visual or hearing impairments, reaching out well in advance during off-peak hours ensures smooth planning. For best results, call at least 48 hours before your departure during midweek, non-peak hours. This ensures your requests are logged and addressed with care. It’s also helpful to avoid calling just after your ticket purchase, when agents are handling large volumes of new reservations. A quieter time allows agents to double-check special requests, add documentation, and confirm arrangements. Traveling with children or elderly family members? Ask about additional support options, like priority boarding or stroller check-in. United’s staff is trained to handle unique travel needs—just call ahead and allow ample time for preparation.

    So, when is the best time to call ☎️+1(888) 642-5075 to avoid long wait times? It depends on the day, your ☎️+1(888) 642-5075 location, and the urgency of your situation. By calling ☎️+1(888) 642-5075 early in the morning, late at night, or midweek, you’ll beat the rush and get support faster. Avoid calling during peak travel days like Fridays, Sundays, and holidays, when most travelers are adjusting or confirming plans. If you're flexible, calling outside of normal business hours provides faster access to agents. Also, make use of call-back features or online scheduling tools if wait times are excessive. Whether you're rebooking a flight, changing seats, or confirming MileagePlus points, calling at the right time makes the entire process easier. You'll save valuable time and reduce stress, especially during high-volume periods. Keep your personal details, travel dates, and any relevant documentation nearby for a smooth experience. Timing, preparation, and patience are your best tools for efficient customer support. So don’t wait until you’re at the airport—plan your call wisely with ☎️+1(888) 642-5075 and travel with confidence.

  18. Dataset of artists who created Plate (facing page 20) from LES MINUTES DE SABLE MÉMORIAL

    • workwithdata.com
    Updated May 8, 2025
    Work With Data (2025). Dataset of artists who created Plate (facing page 20) from LES MINUTES DE SABLE MÉMORIAL [Dataset]. https://www.workwithdata.com/datasets/artists?f=1&fcol0=j0-artwork&fop0=%3D&fval0=Plate+(facing+page+20)+from+LES+MINUTES+DE+SABLE+M%C3%89MORIAL&j=1&j0=artworks
    Explore at:
    Dataset updated
    May 8, 2025
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about artists. It has 1 row and is filtered where the artwork is Plate (facing page 20) from LES MINUTES DE SABLE MÉMORIAL. It features 9 columns including birth date, death date, country, and gender.

  19. DataSheet1_“Tranq-dope” overdose and mortality: lethality induced by fentanyl and xylazine

    • frontiersin.figshare.com
    docx
    Updated Oct 26, 2023
    Mark A. Smith; Samantha L. Biancorosso; Jacob D. Camp; Salome H. Hailu; Alexandra N. Johansen; Mackenzie H. Morris; Hannah N. Carlson (2023). DataSheet1_“Tranq-dope” overdose and mortality: lethality induced by fentanyl and xylazine.DOCX [Dataset]. http://doi.org/10.3389/fphar.2023.1280289.s001
    Explore at:
    docxAvailable download formats
    Dataset updated
    Oct 26, 2023
    Dataset provided by
    Frontiers
    Authors
    Mark A. Smith; Samantha L. Biancorosso; Jacob D. Camp; Salome H. Hailu; Alexandra N. Johansen; Mackenzie H. Morris; Hannah N. Carlson
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: The recreational use of fentanyl in combination with xylazine (i.e., “tranq-dope”) represents a rapidly emerging public health threat characterized by significant toxicity and mortality. This study quantified the interactions between these drugs on lethality and examined the effectiveness of potential rescue medications to prevent a lethal overdose.

    Methods: Male and female mice were administered acute doses of fentanyl, xylazine, or their combination via intraperitoneal injection, and lethality was determined 0.5, 1.0, 1.5, 2.0, and 24 h after administration. Both fentanyl and xylazine produced dose-dependent increases in lethality when administered alone.

    Results: A nonlethal dose of fentanyl (56 mg/kg) produced an approximately 5-fold decrease in the estimated LD50 for xylazine (i.e., the dose estimated to produce lethality in 50% of the population). Notably, a nonlethal dose of xylazine (100 mg/kg) produced an approximately 100-fold decrease in the estimated LD50 for fentanyl. Both drug combinations produced a synergistic interaction as determined via isobolographic analysis. The opioid receptor antagonist, naloxone (3 mg/kg), but not the alpha-2 adrenergic receptor antagonist, yohimbine (3 mg/kg), significantly decreased the lethality of a fentanyl-xylazine combination. Lethality was rapid, with death occurring within 10 min after a high dose combination and generally within 30 min at lower dose combinations. Males were more sensitive to the lethal effects of fentanyl-xylazine combinations under some conditions, suggesting biologically relevant sex differences in sensitivity to fentanyl-xylazine lethality.

    Discussion: These data provide the first quantification of the lethal effects of “tranq-dope” and suggest that rapid administration of naloxone may be effective at preventing death following overdose.
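    An LD50 like those quoted above is conventionally estimated by fitting a sigmoidal dose-response curve to lethality fractions and reading off the dose at 50%. The sketch below illustrates that general technique on synthetic numbers; it is not the study's data or its isobolographic method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic dose-response on a log-dose axis; the LD50 is where p = 0.5.
def logistic(log_dose, log_ld50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

# Synthetic dose groups (mg/kg) and observed lethality fractions.
doses = np.array([1, 3, 10, 30, 100, 300], dtype=float)
lethality = np.array([0.0, 0.1, 0.25, 0.6, 0.9, 1.0])

params, _ = curve_fit(logistic, np.log10(doses), lethality, p0=[1.0, 1.0])
ld50 = 10 ** params[0]
print(f"estimated LD50 ~ {ld50:.1f} mg/kg")
```

    With this machinery, the synergy claim becomes concrete: adding a fixed nonlethal dose of the second drug shifts the fitted curve leftward, and the ratio of the two LD50s (here, roughly 5-fold for xylazine and 100-fold for fentanyl) quantifies the shift.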

  20. HRV-ACC: a dataset with R-R intervals and accelerometer data for the diagnosis of psychotic disorders using a Polar H10 wearable sensor

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 9, 2023
    Piotr Gorczyca (2023). HRV-ACC: a dataset with R-R intervals and accelerometer data for the diagnosis of psychotic disorders using a Polar H10 wearable sensor [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8171265
    Explore at:
    Dataset updated
    Aug 9, 2023
    Dataset provided by
    Michał Romaszewski
    Paweł Dębski
    Przemysław Głomb
    Wilhelm Masarczyk
    Piotr Gorczyca
    Robert Pudlo
    Iga Stokłosa
    Piotr Ścisło
    Magdalena Piegza
    Kamil Książek
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT

    The issue of diagnosing psychotic diseases, including schizophrenia and bipolar disorder, and in particular the objectification of symptom severity assessment, is still a problem requiring the attention of researchers. Two measures that can be helpful in patient diagnosis are heart rate variability calculated based on the electrocardiographic signal and accelerometer mobility data. The following dataset contains data from 30 psychiatric ward patients with schizophrenia or bipolar disorder and 30 healthy persons. The duration of the measurements for individuals was usually between 1.5 and 2 hours. R-R intervals necessary for heart rate variability calculation were collected simultaneously with accelerometer data using a wearable Polar H10 device. The Positive and Negative Syndrome Scale (PANSS) test was performed for each patient participating in the experiment, and its results were attached to the dataset. Furthermore, the code for loading and preprocessing data, as well as for statistical analysis, is included in the corresponding GitHub repository.

    BACKGROUND

    Heart rate variability (HRV), calculated based on electrocardiographic (ECG) recordings of R-R intervals stemming from the heart's electrical activity, may be used as a biomarker of mental illnesses, including schizophrenia and bipolar disorder (BD) [Benjamin et al]. The variations of R-R interval values correspond to the heart's autonomic regulation changes [Berntson et al, Stogios et al]. Moreover, the HRV measure reflects the activity of the sympathetic and parasympathetic parts of the autonomous nervous system (ANS) [Task Force of the European Society of Cardiology the North American Society of Pacing Electrophysiology, Matusik et al]. Patients with psychotic mental disorders show a tendency for a change in the centrally regulated ANS balance in the direction of less dynamic changes in the ANS activity in response to different environmental conditions [Stogios et al]. Larger sympathetic activity relative to the parasympathetic one leads to lower HRV, while, on the other hand, higher parasympathetic activity translates to higher HRV. This loss of dynamic response may be an indicator of mental health. Additional benefits may come from measuring the daily activity of patients using accelerometry. This may be used to register periods of physical activity and inactivity or withdrawal for further correlation with HRV values recorded at the same time.

    EXPERIMENTS

    In our experiment, the participants were 30 psychiatric ward patients with schizophrenia or BD and 30 healthy people. All measurements were performed using a Polar H10 wearable device. The sensor collects ECG recordings and accelerometer data and, additionally, performs detection of R wave peaks. Participants of the experiment had to wear the sensor for a given time, typically between 1.5 and 2 hours; the shortest recording was 70 minutes. During this time, evaluated persons could perform any activity from a few minutes after starting the measurement. Participants were encouraged to undertake physical activity and, more specifically, to take a walk. As the patients were in the medical ward, they received an instruction at the beginning of the experiment to take a walk in the corridors. They were to repeat the walk 30 minutes and 1 hour after the first walk, with the subsequent walks slightly longer (about 3, 5 and 7 minutes, respectively). We did not repeat or supervise this instruction during the experiment, in either the treatment or the control group. Seven persons from the control group did not receive this instruction, and their measurements correspond to freely selected activities with rest periods, though at least three of them performed physical activities during this time. Nevertheless, at the start of the experiment, all participants were requested to rest in a sitting position for 5 minutes. Moreover, for each patient, the disease severity was assessed using the PANSS test, and its scores are attached to the dataset.

    The data from sensors were collected using Polar Sensor Logger application [Happonen]. Such extracted measurements were then preprocessed and analyzed using the code prepared by the authors of the experiment. It is publicly available on the GitHub repository [Książek et al].

    Firstly, we performed a manual artifact detection to remove abnormal heartbeats due to non-sinus beats and technical issues of the device (e.g. temporary disconnections and inappropriate electrode readings). We also performed anomaly detection using Daubechies wavelet transform. Nevertheless, the dataset includes raw data, while a full code necessary to reproduce our anomaly detection approach is available in the repository. Optionally, it is also possible to perform cubic spline data interpolation. After that step, rolling windows of a particular size and time intervals between them are created. Then, a statistical analysis is prepared, e.g. mean HRV calculation using the RMSSD (Root Mean Square of Successive Differences) approach, measuring a relationship between mean HRV and PANSS scores, mobility coefficient calculation based on accelerometer data and verification of dependencies between HRV and mobility scores.
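    The RMSSD measure named above has a compact definition: the root mean square of the differences between consecutive R-R intervals. A minimal sketch, using the sample interval values shown in the data description:

```python
import numpy as np

# RMSSD: root mean square of successive differences between
# consecutive R-R intervals, here expressed in milliseconds.
def rmssd(rr_ms):
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Five consecutive R-R intervals (ms) from the sample rows in the
# data description.
print(rmssd([651, 632, 618, 621, 661]))  # ~ 23.27 ms
```

    Because RMSSD is driven by beat-to-beat differences rather than absolute interval length, it mainly reflects the short-term, parasympathetically mediated variability discussed in the background section.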

    DATA DESCRIPTION

    The structure of the dataset is as follows. One folder, called HRV_anonymized_data contains values of R-R intervals together with timestamps for each experiment participant. The data was properly anonymized, i.e. the day of the measurement was removed to prevent person identification. Files concerned with patients have the name treatment_X.csv, where X is the number of the person, while files related to the healthy controls are named control_Y.csv, where Y is the identification number of the person. Furthermore, for visualization purposes, an image of the raw RR intervals for each participant is presented. Its name is raw_RR_{control,treatment}_N.png, where N is the number of the person from the control/treatment group. The collected data are raw, i.e. before the anomaly removal. The code enabling reproducing the anomaly detection stage and removing suspicious heartbeats is publicly available in the repository [Książek et al]. The structure of consecutive files collecting R-R intervals is following:

    Phone timestamp    RR-interval [ms]
    12:43:26.538000    651
    12:43:27.189000    632
    12:43:27.821000    618
    12:43:28.439000    621
    12:43:29.060000    661
    ...                ...

    The first column contains the timestamp for which the distance between two consecutive R peaks was registered. The corresponding R-R interval is presented in the second column of the file and is expressed in milliseconds.
    The second folder, called accelerometer_anonymized_data contains values of accelerometer data collected at the same time as R-R intervals. The naming convention is similar to that of the R-R interval data: treatment_X.csv and control_X.csv represent the data coming from the persons from the treatment and control group, respectively, while X is the identification number of the selected participant. The numbers are exactly the same as for R-R intervals. The structure of the files with accelerometer recordings is as follows:

    Phone timestamp    X [mg]    Y [mg]    Z [mg]
    13:00:17.196000    -961      -23       182
    13:00:17.205000    -965      -21       181
    13:00:17.215000    -966      -22       187
    13:00:17.225000    -967      -26       193
    13:00:17.235000    -965      -27       191
    ...                ...       ...       ...

    The first column contains a timestamp, while the next three columns correspond to the currently registered acceleration in three axes: X, Y and Z, in milli-g unit.
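    As a hypothetical illustration of turning these three-axis rows into a single activity number: the dataset's actual mobility coefficient is defined in the authors' repository, and the sketch below merely shows one common proxy, the mean deviation of the acceleration magnitude from 1 g (values are in milli-g, so a resting sensor reads a magnitude near 1000).

```python
import numpy as np

# Sample accelerometer rows (X, Y, Z in milli-g) from the table above.
samples_mg = np.array([
    [-961, -23, 182],
    [-965, -21, 181],
    [-966, -22, 187],
    [-967, -26, 193],
    [-965, -27, 191],
], dtype=float)

# Magnitude per row, converted from milli-g to g; at rest this is ~1 g
# (gravity), so deviation from 1 g is a crude movement proxy.
magnitude_g = np.linalg.norm(samples_mg, axis=1) / 1000.0
mobility_proxy = float(np.mean(np.abs(magnitude_g - 1.0)))
print(mobility_proxy)
```

    For these near-stationary sample rows the proxy is close to zero; sustained walking would push the magnitude away from 1 g and raise it.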

    We also attached a file with the PANSS test scores (PANSS.csv) for all patients participating in the measurement. The structure of this file is as follows:

    no_of_person    PANSS_P    PANSS_N    PANSS_G    PANSS_total
    1               8          13         22         43
    2               11         7          18         36
    3               14         30         44         88
    4               18         13         27         58
    ...             ...        ...        ...        ...

    The first column contains the identification number of the patient; the next three columns contain the PANSS scores for positive, negative and general symptoms, respectively, and the last column gives the total PANSS score.
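    A small sketch, assuming the column names shown above, that reproduces the sample rows and verifies the relationship implied by the description: the total score is the sum of the positive, negative and general scores.

```python
import pandas as pd

# Sketch assuming the column names shown above; the rows reproduce the
# sample values from the PANSS.csv excerpt.
panss = pd.DataFrame({
    "no_of_person": [1, 2, 3, 4],
    "PANSS_P": [8, 11, 14, 18],
    "PANSS_N": [13, 7, 30, 13],
    "PANSS_G": [22, 18, 44, 27],
    "PANSS_total": [43, 36, 88, 58],
})

# Sanity check implied by the description: total = P + N + G for each patient.
assert (panss["PANSS_total"] ==
        panss[["PANSS_P", "PANSS_N", "PANSS_G"]].sum(axis=1)).all()
```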

    USAGE NOTES

    All the files necessary to run the HRV and/or accelerometer data analysis are available in the GitHub repository [Książek et al]. HRV data loading, preprocessing (i.e. anomaly detection and removal) and the calculation of mean HRV values in terms of the RMSSD are performed in the main.py file, which also computes Pearson's correlation coefficients between HRV values and PANSS scores, as well as the statistical tests (Levene's and Mann-Whitney U tests) comparing the treatment and control groups. By default, a sensitivity analysis is performed, i.e. the full pipeline is run for different settings of the window size over which the HRV is calculated and for various time intervals between consecutive windows. Heatmaps of the correlation coefficients and the corresponding p-values can be prepared by running the utils_advanced_plots.py file after the sensitivity analysis. Alternatively, a detailed analysis for one selected set of hyperparameters may be prepared (by setting sensitivity_analysis = False), i.e. for 15-minute window sizes, 1-minute time intervals between consecutive windows and no data interpolation. Patients taking quetiapine may be excluded from the calculations by setting exclude_quetiapine = True, because this medicine can have a strong impact on HRV [Hattori et al].
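    The statistics named above can be sketched as follows. This is not the repository's code (which lives in main.py), and every number below is invented purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient RMSSD values (ms) and the sample PANSS totals
# from the table above; all RMSSD numbers are made up for illustration.
rmssd_patients = np.array([21.0, 35.5, 18.2, 27.9])
panss_total = np.array([43, 36, 88, 58])

# Pearson correlation between HRV (RMSSD) and PANSS total scores.
r, p_r = stats.pearsonr(rmssd_patients, panss_total)

# Mann-Whitney U test comparing treatment vs. control RMSSD distributions.
rmssd_treatment = rmssd_patients
rmssd_control = np.array([41.3, 38.7, 45.0, 36.2])
u, p_u = stats.mannwhitneyu(rmssd_treatment, rmssd_control)

print(f"Pearson r = {r:.2f} (p = {p_r:.3f}), Mann-Whitney p = {p_u:.3f}")
```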

    The accelerometer data processing may be performed using the utils_accelerometer.py file. In this case, the accelerometer recordings are downsampled to match the timestamps of the R-R intervals and, for each participant, the mobility coefficient is calculated. Then, a correlation
