Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Methods for comparing hospitals on cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as modeling based on registry data.

Methods and results: Two risk standardization logistic regression models were developed using 2453 patients treated from 2000 to 2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was treated as the “gold standard” against which to compare the administrative model, using metrics including areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to non-modifiable factors only, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well calibrated. There was no significant difference between the c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data.

Conclusions: Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable comparisons of hospital performance in terms of survival after CA, within specific hospital systems or across the entire US.
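The modeling step above can be sketched with scikit-learn. The data here are synthetic (the actual registry and administrative predictors are not listed in this summary), so this is only a minimal illustration of fitting a logistic risk model and computing its c-statistic, not the study's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for administrative predictors (age, comorbidity flags, etc.)
n = 2453
X = rng.normal(size=(n, 6))
logit = X @ np.array([1.2, -0.8, 0.5, 0.0, 0.3, -0.4])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # death at discharge (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# For a binary outcome, the c-statistic equals the area under the ROC curve
c_stat = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"c-statistic: {c_stat:.3f}")
```

Calibration curves and Bland-Altman plots would be computed on the same held-out predictions.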
The regional networking strategy is widely implemented in China as a normative policy aimed at fostering cohesion and enhancing competitiveness. However, the empirical basis for this strategy remains relatively weak due to limitations in measurement methods and data availability. This paper constructs urban networks from enterprise investment data and then measures each city's network externalities using a multiscale geographically weighted regression (MGWR) model. The results show that: (1) regional networking plays a significant role in urban development, although it is not the dominant factor; (2) the benefits of network connections may vary with a city's location and tier; (3) major cities play a pivotal role in the urban network. Based on these conclusions, the paper presents strategic measures to enhance the network's external impacts, aiming to offer insights for other regions formulating regional development strategies and establishing regional urban networks.
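MGWR lets each covariate's effect vary over space with its own bandwidth. As a rough illustration of the underlying idea only (not the paper's model), a single-bandwidth geographically weighted regression can be written as local weighted least squares with Gaussian distance weights, shown here on synthetic city data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cities: coordinates, a network-connectivity covariate, an outcome
coords = rng.uniform(0, 100, size=(50, 2))
x = rng.normal(size=50)                  # e.g. network centrality per city
beta_true = 0.5 + coords[:, 0] / 100     # spatially varying effect
y = beta_true * x + rng.normal(scale=0.1, size=50)

def gwr_coefficients(coords, x, y, bandwidth=30.0):
    """Local weighted least squares at each city with Gaussian distance weights.
    MGWR generalizes this by giving each covariate its own bandwidth."""
    X = np.column_stack([np.ones_like(x), x])
    betas = []
    for i in range(len(y)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        W = np.diag(w)
        b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        betas.append(b)
    return np.array(betas)

betas = gwr_coefficients(coords, x, y)
print("local slope range:", betas[:, 1].min(), betas[:, 1].max())
```

Each city gets its own intercept and slope, so the "network external effect" can be read off per city rather than as one global coefficient.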
According to a 2020 survey of stakeholders in the United States healthcare industry, ** percent of respondents indicated that a lack of data standardization was the biggest challenge to health data sharing between payers and providers. Furthermore, a lack of technical interoperability and the quality of the data being shared were each noted by ** percent of respondents.
https://www.statsndata.org/how-to-order
The Dairy Standardization Equipment market is integral to the dairy processing industry, playing a crucial role in maintaining the quality and consistency of dairy products. This specialized equipment is designed to standardize the fat and solids content in milk and cream, ensuring that the final products meet regulatory standards.
https://www.technavio.com/content/privacy-notice
Online Data Science Training Programs Market Size 2025-2029
The online data science training programs market size is forecast to increase by USD 8.67 billion, at a CAGR of 35.8% between 2024 and 2029.
The market is experiencing significant growth due to the increasing demand for data science professionals in various industries. The job market offers lucrative opportunities for individuals with data science skills, making online training programs an attractive option for those seeking to upskill or reskill. Another key driver in the market is the adoption of microlearning and gamification techniques in data science training. These approaches make learning more engaging and accessible, allowing individuals to acquire new skills at their own pace. Furthermore, the availability of open-source learning materials has democratized access to data science education, enabling a larger pool of learners to enter the field. However, the market also faces challenges, including the need for continuous updates to keep up with the rapidly evolving data science landscape and the lack of standardization in online training programs, which can make it difficult for employers to assess the quality of graduates. Companies seeking to capitalize on market opportunities should focus on offering up-to-date, high-quality training programs that incorporate microlearning and gamification techniques, while also addressing the challenges of continuous updates and standardization. By doing so, they can differentiate themselves in a competitive market and meet the evolving needs of learners and employers alike.
What will be the Size of the Online Data Science Training Programs Market during the forecast period?
The online data science training market continues to evolve, driven by the increasing demand for data-driven insights and innovations across various sectors. Data science applications, from computer vision and deep learning to natural language processing and predictive analytics, are revolutionizing industries and transforming business operations. Industry case studies showcase the impact of data science in action, with big data and machine learning driving advancements in healthcare, finance, and retail. Virtual labs enable learners to gain hands-on experience, while data scientist salaries remain competitive and attractive. Cloud computing and data science platforms facilitate interactive learning and collaborative research, fostering a vibrant data science community. Data privacy and security concerns are addressed through advanced data governance and ethical frameworks. Data science libraries, such as TensorFlow and Scikit-Learn, streamline the development process, while data storytelling tools help communicate complex insights effectively. Data mining and predictive analytics enable organizations to uncover hidden trends and patterns, driving innovation and growth. The future of data science is bright, with ongoing research and development in areas like data ethics, data governance, and artificial intelligence. Data science conferences and education programs provide opportunities for professionals to expand their knowledge and expertise, ensuring they remain at the forefront of this dynamic field.
How is this Online Data Science Training Programs Industry segmented?
The online data science training programs industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

- Type: Professional degree courses, Certification courses
- Application: Students, Working professionals
- Language: R programming, Python, BigML, SAS, Others
- Method: Live streaming, Recorded
- Program Type: Bootcamps, Certificates, Degree Programs
- Geography: North America (US, Mexico), Europe (France, Germany, Italy, UK), Middle East and Africa (UAE), APAC (Australia, China, India, Japan, South Korea), South America (Brazil), Rest of World (ROW)
By Type Insights
The professional degree courses segment is estimated to witness significant growth during the forecast period. The market encompasses various segments catering to diverse learning needs. The professional degree course segment holds a significant position, offering comprehensive and in-depth training in data science. This segment's curriculum covers essential aspects such as statistical analysis, machine learning, data visualization, and data engineering. Delivered by industry professionals and academic experts, these courses ensure a high-quality education experience. Interactive learning environments, including live lectures, webinars, and group discussions, foster a collaborative and engaging experience. Data science applications, including deep learning, computer vision, and natural language processing, are integral to the market's growth. Data analysis, a crucial application, is gaining traction due to the increasing demand for data-driven decision-making.
By US Open Data Portal, data.gov [source]
This dataset contains stroke mortality data among US adults (35+) by state/territory and county. Learn more about the health of people within your own state or region, across genders and ethnicities. Reliable statistics even for small counties can be seen, thanks to 3-year averages, age-standardization, and spatial smoothing. Data sources such as the National Vital Statistics System give you all the data you need to get a detailed sense of your population's total cardiovascular health. With interactive maps created from this data also provided covering heart disease risks, death rates and hospital bed availability across each location in America, you can now gain a powerful perspective on how effective healthcare initiatives are making an impact in those who live there. Study up on the real cardiovascular conditions plaguing those around us today to make a real change in public health!
This dataset contains stroke mortality data among US adults (35+) by state/territory and county. This data can be useful in identifying areas where stroke mortality is high and where interventions to reduce mortality are most needed.
To access the dataset, you need to download it from Kaggle. The dataset consists of 18 columns including year, location description, geographic level, source of data, class of data values provided, topic of discussion with regard to stroke mortality rates (age-standardized), labels for stratification categories and stratifications used within the given age group when performing this analysis. The last 3 columns consist of geographical coordinates for each location (Y_lat & X_lon) as well as an overall georeferenced column (Georeferenced Column).
Once you have downloaded the dataset there are a few ways you can go about using it:
- You can perform a descriptive analysis on any particular column using methods such as summary statistics or distributions graphs;
- You can create your own maps or other visual representation based on the latitude/longitude columns;
- You could look at differences between states and counties/areas within states by subsetting out certain areas;
- Using statistical testing methods you could create inferential analyses that may lead to insights on why some areas seem more prone to higher levels of stroke mortality than others
- Track county-level stroke mortality trends among US adults (35+) over time.
- Identify regions of higher stroke mortality risk and use that information to inform targeted, preventative health policies and interventions.
- Analyze differences in stroke mortality rates by gender, race/ethnicity, or geographic location to identify potential disparities in care access or outcomes for certain demographic groups
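The first analyses suggested above can be sketched in pandas using column names from the file's schema. The rows here are toy stand-ins; in practice you would read the downloaded CSV:

```python
import pandas as pd

# Toy rows mimicking the dataset's schema (in practice: pd.read_csv("csv-1.csv"))
df = pd.DataFrame({
    "Year": [2014, 2014, 2014, 2014],
    "LocationAbbr": ["AL", "AL", "GA", "GA"],
    "GeographicLevel": ["County"] * 4,
    "Stratification1": ["Male", "Female", "Male", "Female"],
    "Data_Value": [95.3, 81.2, 88.7, 74.9],  # age-standardized rate per 100,000
})

# Compare mean rates by state, then the male/female gap within each state
by_state = df.groupby("LocationAbbr")["Data_Value"].mean()
gap = (df.pivot_table(index="LocationAbbr", columns="Stratification1",
                      values="Data_Value")
         .assign(gap=lambda t: t["Male"] - t["Female"]))
print(by_state)
print(gap["gap"])
```

Subsetting by `GeographicLevel` or `Year` follows the same pattern, and the latitude/longitude columns can feed any mapping library.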
If you use this dataset in your research, please credit the original authors. Data Source
Unknown License - Please check the dataset description for more information.
File: csv-1.csv

| Column name | Description |
|:---|:---|
| Year | Year of the data. (Integer) |
| LocationAbbr | Abbreviation of the state or territory. (String) |
| LocationDesc | Name of the state or territory. (String) |
| GeographicLevel | Level of geographic detail. (String) |
| DataSource | Source of the data. (String) |
| Class | Classification of the data. (String) |
| Topic | Topic of the data. (String) |
| Data_Value | Numeric value associated with the topic. (Float) |
| Data_Value_Unit | Unit used to express the data value. (String) |
| Data_Value_Type | Type of data value. (String) |
| Data_Value_Footnote_Symbol | Symbol associated with the data value footnote. (String) |
| StratificationCategory1 | First category of stratification. (String) |
| Stratification1 | First stratifica... |
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset is a comprehensive collection of over 3 million research paper titles and abstracts, curated and consolidated from multiple high-quality academic sources. The dataset provides a unified, clean, and standardized format for researchers, data scientists, and machine learning practitioners working on natural language processing, academic research analysis, and knowledge discovery tasks.
| Metric | Value |
|---|---|
| Total Records | ~3,000,000+ |
| Columns | 2 (title, abstract) |
| File Size | 4.15 GB |
| Format | CSV |
| Duplicates | Removed |
| Missing Values | Removed |
cleaned_papers.csv
├── title (string): Scientific paper title
└── abstract (string): Scientific paper abstract
The dataset underwent a rigorous cleaning and standardization process:
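A minimal pandas sketch of such a cleaning pass, matching the duplicate and missing-value removal recorded in the table above (a toy stand-in, not the actual pipeline):

```python
import pandas as pd

# Toy frame standing in for the raw merged sources (real file: cleaned_papers.csv)
raw = pd.DataFrame({
    "title": ["A Study", "A Study", None, "  Another Paper  "],
    "abstract": ["Abstract one.", "Abstract one.", "Orphan abstract.", "Abstract two."],
})

cleaned = (
    raw.dropna(subset=["title", "abstract"])           # remove missing values
       .assign(title=lambda d: d["title"].str.strip(),
               abstract=lambda d: d["abstract"].str.strip())
       .drop_duplicates(subset=["title", "abstract"])  # remove exact duplicates
       .reset_index(drop=True)
)
print(len(cleaned))  # 2
```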
All records were normalized to a common title and abstract format. This dataset is ideal for natural language processing, academic research analysis, and knowledge discovery tasks.
This dataset consolidates academic papers from the following sources:
This dataset represents a point-in-time consolidation. Future versions may include: - Additional academic sources - Extended fields (authors, publication dates, venues) - Domain-specific subsets - Enhanced metadata
Please respect the individual licenses of the source datasets. This consolidated version is provided for research and educational purposes. When using this dataset:
🙏 Acknowledgments
Special thanks to all the original dataset creators and the academic communities that make their research data publicly available. This work builds upon their valuable contributions to open science and knowledge sharing.
Keywords: academic papers, research abstracts, NLP, machine learning, text mining, scientific literature, ArXiv, PubMed, natural language processing, research dataset
https://www.statsndata.org/how-to-order
The International Organization for Standardization (ISO) roller chain sprocket market plays a crucial role in various industrial applications, serving as a key component in power transmission systems. Sprockets are essential for connecting roller chains to drive machinery efficiently, providing a reliable solution for power transmission.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
[Instructions for use] 1. This dataset was manually curated by Yidu Cloud Medicine to reflect the distribution of real medical records. 2. This dataset is an example of the Yidu-N7K dataset on OpenKG; the Yidu-N7K dataset may only be used for academic research in natural language processing, not for commercial purposes.

The Yidu-N4K dataset is derived from CHIP 2019 evaluation task 1, the "clinical terminology standardization task". The standardization of clinical terms is an indispensable task in medical statistics. Clinically, there are often hundreds of different ways to write the same diagnosis, operation, medicine, examination, test, or symptom. The problem that standardization (normalization) solves is finding the corresponding standard statement for each of these varied clinical statements. With terminology standardized, researchers can carry out subsequent statistical analysis of electronic medical records (EMRs). In essence, clinical terminology standardization is a kind of semantic similarity matching task; however, because the original expressions are so diverse, a single matching model has difficulty achieving good results.

Yidu Cloud, a leading medical artificial intelligence company and the first unicorn to drive medical innovation solutions with data intelligence, operates with the mission of "data intelligence, green medical care" and the goal of "improving the relationship between human beings and diseases". It uses data-driven artificial intelligence to help governments, hospitals, and the wider industry tap the value of medical big data, building a big-data ecosystem platform for the medical industry with nationwide coverage, coordinated utilization, and unified access. Since its establishment in 2013, Yidu Cloud has assembled a strong team of world-renowned scientists and leading professionals. The company invests hundreds of millions of yuan each year in R&D and in building its service system, has built a medical data intelligence platform with large data-processing capacity, high data integrity, and a transparent development process, and has obtained dozens of software copyrights and national invention patents.
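Since the description frames clinical terminology standardization as a semantic similarity matching task, here is a minimal baseline (not the Yidu system, and shown on hypothetical English stand-ins rather than Chinese clinical text): match each raw mention to the closest standard term by character n-gram TF-IDF cosine similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical standard vocabulary and raw clinical mentions
standard_terms = ["myocardial infarction", "type 2 diabetes mellitus", "appendectomy"]
raw_mentions = ["acute MI, myocardial infarct", "diabetes type II", "appendix removal surgery"]

# Character n-grams tolerate spelling variants better than whole-word tokens
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
term_matrix = vec.fit_transform(standard_terms)
mention_matrix = vec.transform(raw_mentions)

sims = cosine_similarity(mention_matrix, term_matrix)
for mention, row in zip(raw_mentions, sims):
    print(mention, "->", standard_terms[row.argmax()])
```

As the text notes, a single matcher like this struggles with the full diversity of clinical expressions; competitive systems combine several models.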
https://www.statsndata.org/how-to-order
The Normalizing Service market is an essential segment within the broader field of tools and services that aim to enhance data integrity and standardization across various industries. As organizations increasingly rely on data-driven decision-making, the demand for services that can transform raw data into standardized formats continues to grow.
Number of deaths, crude mortality rates and age standardized mortality rates (based on 2011 population) for selected grouped causes, by sex, 2000 to most recent year.
The human microbiome has emerged as a central research topic in human biology and biomedicine. Current microbiome studies generate high-throughput omics data across different body sites, populations, and life stages. Many of the challenges in microbiome research resemble those of other high-throughput studies: quantitative analyses must address the heterogeneity of the data, its specific statistical properties, and the remarkable variation in microbiome composition across individuals and body sites. This has led to a broad spectrum of statistical and machine learning challenges, ranging from study design, data processing, and standardization to analysis, modeling, cross-study comparison, prediction, data science ecosystems, and reproducible reporting. Although many statistics and machine learning approaches and tools have been developed, new techniques are needed to deal with emerging applications and the vast heterogeneity of microbiome data. We review and discuss emerging applications of statistical and machine learning techniques in human microbiome studies and introduce the COST Action CA18131 “ML4Microbiome”, which brings together microbiome researchers and machine learning experts to address current challenges such as standardization of analysis pipelines for reproducibility of data analysis results, benchmarking, and the improvement or development of existing and new tools and ontologies.
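As one concrete example of the standardization issues mentioned above: microbiome counts are compositional, and a common preprocessing choice (one option among many, not one prescribed by this text) is the centered log-ratio (CLR) transform.

```python
import numpy as np

def clr(counts, pseudocount=0.5):
    """Centered log-ratio transform for compositional count data.
    A pseudocount avoids log(0) for unobserved taxa."""
    x = counts + pseudocount
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Rows = samples, columns = taxa (toy counts)
counts = np.array([[120, 30, 0, 850],
                   [ 10, 500, 40, 450]], dtype=float)
z = clr(counts)
print(z.sum(axis=1))  # each row sums to ~0 by construction
```

After CLR, standard statistical and machine learning tools that assume unconstrained real-valued features become applicable.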
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
With the expansion of Internet of Things (IoT) devices, security is a pressing issue, as attacks are constantly growing more complex. Traditional attack detection methods struggle with the real-time processing demands and resource constraints of IoT systems. To address these challenges, a stacking-based Tiny Machine Learning (TinyML) model has been proposed for attack detection in IoT networks, enabling efficient detection without additional computational overhead. The experiments were conducted using the publicly available ToN-IoT dataset, comprising 461,008 labeled instances across 10 attack categories. Data preprocessing included label encoding, feature selection, and data standardization. The stacking ensemble combines a lightweight Decision Tree (DT) and a small Neural Network (NN) to improve the system's predictive power and generalization. Model performance was evaluated by accuracy, precision, recall, F1-score, specificity, and false positive rate (FPR). Experimental results demonstrate that the stacked TinyML model outperforms traditional ML methods in efficiency and detection performance, achieving an accuracy of 99.98% with an average inference latency of 0.12 ms and an estimated power consumption of 0.01 mW.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Note: DPH is updating and streamlining the COVID-19 cases, deaths, and testing data. As of 6/27/2022, the data will be published in four tables instead of twelve.
The COVID-19 Cases, Deaths, and Tests by Day dataset contains cases and test data by date of sample submission. The death data are by date of death. This dataset is updated daily and contains information back to the beginning of the pandemic. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Cases-Deaths-and-Tests-by-Day/g9vi-2ahj.
The COVID-19 State Metrics dataset contains over 93 columns of data. This dataset is updated daily and currently contains information starting June 21, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-State-Level-Data/qmgw-5kp6 .
The COVID-19 County Metrics dataset contains 25 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-County-Level-Data/ujiq-dy22 .
The COVID-19 Town Metrics dataset contains 16 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Town-Level-Data/icxw-cada . To protect confidentiality, if a town has fewer than 5 cases or positive NAAT tests over the past 7 days, those data will be suppressed.
COVID-19 cases and associated deaths that have been reported among Connecticut residents, broken down by race and ethnicity. All data in this report are preliminary; data for previous dates will be updated as new reports are received and data errors are corrected. Deaths reported to either the Office of the Chief Medical Examiner (OCME) or the Department of Public Health (DPH) are included in the COVID-19 update.
The following data show the number of COVID-19 cases and associated deaths per 100,000 population by race and ethnicity. Crude rates represent the total cases or deaths per 100,000 people. Age-adjusted rates consider the age of the person at diagnosis or death when estimating the rate and use a standardized population to provide a fair comparison between population groups with different age distributions. Age-adjustment is important in Connecticut as the median age of the non-Hispanic white population is 47 years, whereas it is 34 years among non-Hispanic blacks and 29 years among Hispanics. Because most non-Hispanic white residents who died were over 75 years of age, the age-adjusted rates are lower than the unadjusted rates. In contrast, Hispanic residents who died tend to be younger than 75 years of age, which results in higher age-adjusted rates.
The population data used to calculate rates is based on the CT DPH population statistics for 2019, which is available online here: https://portal.ct.gov/DPH/Health-Information-Systems--Reporting/Population/Population-Statistics. Prior to 5/10/2021, the population estimates from 2018 were used.
Rates are standardized to the 2000 US Millions Standard population (data available here: https://seer.cancer.gov/stdpopulations/). Standardization was done using 19 age groups (0, 1-4, 5-9, 10-14, ..., 80-84, 85 years and older). More information about direct standardization for age adjustment is available here: https://www.cdc.gov/nchs/data/statnt/statnt06rv.pdf
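Direct standardization as described reduces to a weighted average of age-specific rates, where the weights are the standard population's age shares. A condensed sketch with three illustrative age groups and hypothetical rates (the real calculation uses all 19 groups and the 2000 US Standard Million):

```python
import numpy as np

# Illustrative age-specific death rates per 100,000 (hypothetical values)
rates_group_a = np.array([2.0, 40.0, 900.0])   # e.g. ages <45, 45-74, 75+
rates_group_b = np.array([5.0, 80.0, 700.0])

# Standard-population weights (toy shares; real weights come from the
# 2000 US Standard Million across 19 age groups)
std_weights = np.array([0.62, 0.30, 0.08])
assert np.isclose(std_weights.sum(), 1.0)

# Age-adjusted rate = sum of age-specific rates weighted by the standard population
adj_a = (rates_group_a * std_weights).sum()
adj_b = (rates_group_b * std_weights).sum()
print(f"age-adjusted rates: A={adj_a:.1f}, B={adj_b:.1f} per 100,000")
```

Note how group A's much higher 75+ rate is down-weighted by the small standard share of that age group, which is exactly the effect described above for the non-Hispanic white population.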
Categories are mutually exclusive. The category “multiracial” includes people who answered ‘yes’ to more than one race category. Counts may not add up to total case counts as data on race and ethnicity may be missing. Age adjusted rates calculated only for groups with more than 20 deaths. Abbreviation: NH=Non-Hispanic.
Data on Connecticut deaths were obtained from the Connecticut Deaths Registry maintained by the DPH Office of Vital Records. Cause of death was determined by a death certifier (e.g., physician, APRN, medical examiner) using their best clinical judgment. Additionally, all COVID-19 deaths, including suspected or related, are required to be reported to OCME. On April 4, 2020, CT DPH and OCME released a joint memo to providers and facilities within Connecticut providing guidelines for certifying deaths due to COVID-19 that were consistent with the CDC's guidelines, and a reminder of the required reporting to OCME. As of July 1, 2021, OCME had reviewed every case reported and performed additional investigation on about one-third of reported deaths to better ascertain whether COVID-19 did or did not cause or contribute to the death. Some of these investigations resulted in the OCME performing postmortem swabs for PCR testing on individuals whose deaths were suspected to be due to COVID-19 but for whom an antemortem diagnosis could not be made. The OCME issued or re-issued about 10% of COVID-19 death certificates and, when appropriate, removed COVID-19 from the death certificate. For standardization and tabulation of mortality statistics, written cause-of-death statements made by the certifiers on death certificates are sent to the National Center for Health Statistics (NCHS) at the CDC, which assigns cause-of-death codes according to the International Classification of Diseases, 10th Revision (ICD-10). COVID-19 deaths in this report are defined as those for which the death certificate has an ICD-10 code of U07.1 as either a primary (underlying) or a contributing cause of death. More information on COVID-19 mortality can be found at the following link: https://portal.ct.gov/DPH/Health-Information-Systems--Reporting/Mortality/Mortality-Statistics
Data are subject to future revision as reporting changes.
Starting in July 2020, this dataset will be updated every weekday.
Additional notes: A delay in the data pull schedule occurred on 06/23/2020. Data from 06/22/2020 was processed on 06/23/2020 at 3:30 PM. The normal data cycle resumed with the data for 06/23/2020.
A network outage on 05/19/2020 resulted in a change in the data pull schedule. Data from 5/19/2020 was processed on 05/20/2020 at 12:00 PM. Data from 5/20/2020 was processed on 5/20/2020 8:30 PM. The normal data cycle resumed on 05/20/2020 with the 8:30 PM data pull. As a result of the network outage, the timestamp on the datasets on the Open Data Portal differ from the timestamp in DPH's daily PDF reports.
Starting 5/10/2021, the date field will represent the date this data was updated on data.ct.gov. Previously the date the data was pulled by DPH was listed, which typically coincided with the date before the data was published on data.ct.gov. This change was made to standardize the COVID-19 data sets on data.ct.gov.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset is a ready-to-train CSV version of UniMiB-SHAR data. It converts the original acc_data.npy / acc_labels.npy into tidy CSV, adds a magnitude channel, and applies z-score standardization (using train statistics only). A stratified (by label) 80/10/10 split is provided with fixed seed for full reproducibility.
ID – sample identifier (each ID = one 3-second window)
t – time index in the window (0…150; 151 points @ 50 Hz for ~3 s)
ax, ay, az – standardized accelerations (z-scored with train stats)
mag – standardized magnitude, computed as sqrt(ax_raw² + ay_raw² + az_raw²) before standardization
label – integer class ID (same encoding as the source labels)
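The preprocessing described above (magnitude computed from the raw axes, then z-scoring every channel with statistics from the training split only) can be sketched as follows, on toy accelerometer data rather than the actual UniMiB-SHAR arrays:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy raw accelerometer samples: (n_samples, 3) for ax, ay, az
train_raw = rng.normal(size=(1000, 3))
test_raw = rng.normal(size=(200, 3))

def add_magnitude(raw):
    # magnitude from the RAW axes, before any standardization
    mag = np.linalg.norm(raw, axis=1, keepdims=True)
    return np.hstack([raw, mag])

train = add_magnitude(train_raw)
test = add_magnitude(test_raw)

# z-score using train statistics only, so no test-set information leaks in
mu, sigma = train.mean(axis=0), train.std(axis=0)
train_z = (train - mu) / sigma
test_z = (test - mu) / sigma

print(train_z.mean(axis=0).round(6))  # ~0 for the training split
```

Applying the train-split mean and standard deviation to the test split is what makes the reported splits leakage-free and reproducible.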
Raw data: UniMiB-SHAR . Please cite the original authors when using this dataset in publications.
This processed packaging/script is released in the same spirit of research use; please reference this Kaggle dataset if it helps your work.
Suggested citation for the original dataset (adapt as needed): Micucci, D., Mobilio, M., & Napoletano, P. (2017). UniMiB SHAR: A Dataset for Human Activity Recognition Using Acceleration Data from Smartphones. Applied Sciences, 7(10), 1101. https://doi.org/10.3390/app7101101
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Deaths and mortality rates (age-standardized using the 2011 population), by selected grouped causes and sex, Canada, provinces and territories
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The dataset titled "pokemon_dataset.csv" is a cleaned and consolidated version of a relational database initially created for visualizing Pokémon stats across typings and generations in Power BI. This dataset provides a single, well-structured table containing comprehensive Pokémon information, designed for ease of use in various data tools and platforms.
The primary goal of this dataset is to enable Pokémon fans, data analysts, and visualization enthusiasts to:
- Explore Pokémon stats across different generations and types.
- Build analytical projects and dashboards.
- Gain rich insights into the Pokémon universe.
The data was sourced from PokéAPI, an open and accessible RESTful API for Pokémon-related data.
- PokéAPI provides detailed information about Pokémon species, moves, abilities, stats, and more, making it a trusted resource for Pokémon datasets.
Standardization:
- Datasets from PokéAPI were standardized using a common pokemon_id to ensure consistency and compatibility.
Access Methodology:
- Data was accessed using Python's requests library to fetch JSON objects from PokéAPI.
- These objects were flattened, passed into SQL for relational mapping, and cleaned to produce the final dataset.
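The access pattern described above (fetch a JSON object per Pokémon, then flatten it into a single table row) can be sketched as follows. The response here is a trimmed stand-in so the example does not depend on the network, and the flattening function is illustrative, not the actual pipeline:

```python
# In practice: resp = requests.get("https://pokeapi.co/api/v2/pokemon/1").json()
# A trimmed stand-in for one PokéAPI response:
resp = {
    "id": 1, "name": "bulbasaur",
    "types": [{"slot": 1, "type": {"name": "grass"}},
              {"slot": 2, "type": {"name": "poison"}}],
    "stats": [{"base_stat": 45, "stat": {"name": "hp"}},
              {"base_stat": 49, "stat": {"name": "attack"}}],
}

def flatten_pokemon(obj):
    """Flatten one nested PokéAPI object into a single flat row."""
    row = {"pokemon_id": obj["id"], "name": obj["name"]}
    types = sorted(obj["types"], key=lambda t: t["slot"])
    row["primary_type"] = types[0]["type"]["name"]
    row["secondary_type"] = types[1]["type"]["name"] if len(types) > 1 else None
    for s in obj["stats"]:
        row[s["stat"]["name"]] = s["base_stat"]
    return row

row = flatten_pokemon(resp)
print(row)
```

Rows produced this way can then be loaded into SQL for the relational mapping step the description mentions.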
Optimization:
- The dataset is optimized for use in tools such as:
- Excel
- Google Sheets
- SQL
- pandas
- Power BI
The dataset consists of the following columns:
| Column Name | Description |
|---|---|
| pokemon_id | A unique identifier for each Pokémon. |
| name | The name of the Pokémon. |
| primary_type | The primary type of the Pokémon (e.g., Fire, Water, Grass). |
| secondary_type | The secondary type of the Pokémon (if applicable). |
| first_appreance | The game in which the Pokémon first appeared (e.g., Red/Blue, Gold/Silver). |
| generation | The generation to which the Pokémon belongs (e.g., Gen 1, Gen 2). |
| category | The category of the Pokémon (e.g., Regular, Legendary, Mythical). |
| total_base_stats | The sum of all the individual stats for a Pokémon. |
| hp | The Pokémon's base HP stat. |
| attack | The Pokémon's base Attack stat. |
| defense | The Pokémon's base Defense stat. |
| special_attack | The Pokémon's base Special Attack stat. |
| special_defense | The Pokémon's base Special Defense stat. |
| speed | The Pokémon's base Speed stat. |
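Since total_base_stats is defined as the sum of the six individual stats, a quick pandas consistency check (toy rows standing in for pokemon_dataset.csv):

```python
import pandas as pd

stat_cols = ["hp", "attack", "defense", "special_attack", "special_defense", "speed"]
df = pd.DataFrame({
    "name": ["bulbasaur", "charmander"],
    "hp": [45, 39], "attack": [49, 52], "defense": [49, 43],
    "special_attack": [65, 60], "special_defense": [65, 50], "speed": [45, 65],
})
df["total_base_stats"] = df[stat_cols].sum(axis=1)

# Verify the invariant stated in the column table
assert (df["total_base_stats"] == df[stat_cols].sum(axis=1)).all()
print(df[["name", "total_base_stats"]])
```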
This dataset is tailored for:
- Pokémon fans who love exploring data and gaining deeper insights into their favorite Pokémon.
- Data analysts and developers creating Pokémon-related projects.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Rank, number of deaths, percentage of deaths and age standardized mortality rates (based on 2021 estimated population) for leading causes of death, by sex, 2000 to most recent year.
Number of deaths, crude mortality rates and age standardized mortality rates (based on 2011 population) for selected grouped causes, by sex. Data are available beginning from 2000.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data for publication found here: https://doi.org/10.3390/toxics10050244
File 1: Raw data table consisting of acute coral larvae bioassay data after oxybenzone exposure, as well as data for recruits.
File 2: Modified R code for estimating LC/EC50