MMWR Surveillance Summary 66(No. SS-1):1-8 found that nonmetropolitan areas have significant numbers of potentially excess deaths from the five leading causes of death. These figures accompany that report by presenting information on potentially excess deaths in nonmetropolitan and metropolitan areas at the state level. They also add additional years of data and options for selecting different age ranges and benchmarks.
Potentially excess deaths are defined in MMWR Surveillance Summary 66(No. SS-1):1-8 as deaths that exceed the numbers that would be expected if the death rates of states with the lowest rates (benchmarks) occurred across all states. They are calculated by subtracting expected deaths for specific benchmarks from observed deaths. Not all potentially excess deaths can be prevented; some areas might have characteristics that predispose them to higher rates of death. However, many potentially excess deaths might represent deaths that could be prevented through improved public health programs that support healthier behaviors and neighborhoods or better access to health care services.
Mortality data for U.S. residents come from the National Vital Statistics System. Estimates based on fewer than 10 observed deaths are not shown and are shaded yellow on the map.
Underlying cause of death is based on the International Classification of Diseases, 10th Revision (ICD-10):
Heart disease (I00-I09, I11, I13, and I20-I51)
Cancer (C00-C97)
Unintentional injury (V01-X59 and Y85-Y86)
Chronic lower respiratory disease (J40-J47)
Stroke (I60-I69)
Locality (nonmetropolitan vs. metropolitan) is based on the Office of Management and Budget’s 2013 county-based classification scheme.
Benchmarks are based on the three states with the lowest age- and cause-specific mortality rates. Potentially excess deaths for each state are calculated by subtracting deaths at the benchmark rates (expected deaths) from observed deaths (see the short example following the references below). Users can explore three benchmarks:
“2010 Fixed” is a fixed benchmark based on the best-performing states in 2010.
“2005 Fixed” is a fixed benchmark based on the best-performing states in 2005.
“Floating” is based on the best-performing states in each year, so it changes from year to year.
SOURCES
CDC/NCHS, National Vital Statistics System, mortality data (see http://www.cdc.gov/nchs/deaths.htm); and CDC WONDER (see http://wonder.cdc.gov).
REFERENCES
Moy E, Garcia MC, Bastian B, Rossen LM, Ingram DD, Faul M, Massetti GM, Thomas CC, Hong Y, Yoon PW, Iademarco MF. Leading Causes of Death in Nonmetropolitan and Metropolitan Areas - United States, 1999-2014. MMWR Surveillance Summary 2017; 66(No. SS-1):1-8.
Garcia MC, Faul M, Massetti G, Thomas CC, Hong Y, Bauer UE, Iademarco MF. Reducing Potentially Excess Deaths from the Five Leading Causes of Death in the Rural United States. MMWR Surveillance Summary 2017; 66(No. SS-2):1-7.
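The benchmark calculation described above reduces to a subtraction of expected from observed deaths. The sketch below illustrates it in Python with hypothetical inputs; the rate, population, and death counts are placeholders, not values from the dataset.

```python
# Sketch of the potentially-excess-deaths calculation described above.
# All numbers below are hypothetical placeholders, not values from the dataset.

def potentially_excess_deaths(observed_deaths, benchmark_rate_per_100k, population):
    """Observed deaths minus the deaths expected at the benchmark rate."""
    expected = benchmark_rate_per_100k / 100_000 * population
    # Negative differences mean the state performed better than the benchmark.
    return max(observed_deaths - expected, 0)

# Example: a state of 5,000,000 residents with 9,500 observed heart disease deaths
# and a benchmark rate of 150 deaths per 100,000 persons.
print(potentially_excess_deaths(9_500, 150, 5_000_000))  # -> 2000.0
```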
Reporting of Aggregate Case and Death Count data was discontinued on May 11, 2023, with the expiration of the COVID-19 public health emergency declaration. Although these data will continue to be publicly available, this dataset will no longer be updated.
The surveillance case definition for COVID-19, a nationally notifiable disease, was first described in a position statement from the Council of State and Territorial Epidemiologists, which was later revised. However, there is some variation in how jurisdictions implemented these case definitions. More information on how CDC collects COVID-19 case surveillance data can be found at FAQ: COVID-19 Data and Surveillance.
Aggregate Data Collection Process
Since the beginning of the COVID-19 pandemic, data were reported by state and local health departments through a robust process with the following steps:
This process was collaborative, with CDC and jurisdictions working together to ensure the accuracy of COVID-19 case and death numbers. County counts provided the most up-to-date numbers on cases and deaths by report date. Throughout data collection, CDC retrospectively updated counts to correct known data quality issues.
Description
This archived public use dataset presents cumulative and weekly COVID-19 case and death rates per 100,000 persons across various sociodemographic factors for all states and their counties. All resulting data are expressed as rates, calculated as the number of cases or deaths per 100,000 persons in counties meeting various classification criteria, using the US Census Bureau Population Estimates Program (2019 Vintage).
Each county within jurisdictions is classified into multiple categories for each factor. All rates in this dataset are based on classification of counties by the characteristics of their population, not individual-level factors. This applies to each of the available factors observed in this dataset. Specific factors and their corresponding categories are detailed below.
Population-level factors
Each unique population factor is detailed below. Please note that the “Classification” column describes each of the 12 factors in the dataset, including a data dictionary describing what each numeric digit means within each classification. The “Category” column uses numeric digits (2-6, depending on the factor) defined in the “Classification” column.
Metro vs. Non-Metro – “Metro_Rural”
Metro vs. Non-Metro classification type is an aggregation of the 6 National Center for Health Statistics (NCHS) Urban-Rural classifications, where “Metro” counties include Large Central Metro, Large Fringe Metro, Medium Metro, and Small Metro areas and “Non-Metro” counties include Micropolitan and Non-Core (Rural) areas.
1 – Metro, including “Large Central Metro, Large Fringe Metro, Medium Metro, and Small Metro” areas
2 – Non-Metro, including “Micropolitan, and Non-Core” areas
Urban/rural - “NCHS_Class”
Urban/rural classification type is based on the 2013 National Center for Health Statistics Urban-Rural Classification Scheme for Counties. Levels consist of the following (a small lookup sketch follows the list):
1 Large Central Metro
2 Large Fringe Metro
3 Medium Metro
4 Small Metro
5 Micropolitan
6 Non-Core (Rural)
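For reference, the six NCHS codes and the Metro/Non-Metro roll-up described above can be expressed as a simple lookup. This is an illustrative sketch only; the variable names are hypothetical and not part of the dataset.

```python
# Hypothetical lookup tables for the classifications described above.
NCHS_CLASS = {
    1: "Large Central Metro",
    2: "Large Fringe Metro",
    3: "Medium Metro",
    4: "Small Metro",
    5: "Micropolitan",
    6: "Non-Core (Rural)",
}

def metro_rural(nchs_code: int) -> int:
    """Collapse the six NCHS levels into 1 = Metro, 2 = Non-Metro."""
    return 1 if nchs_code <= 4 else 2

print(NCHS_CLASS[5], metro_rural(5))  # -> Micropolitan 2
```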
American Community Survey (ACS) data were used to classify counties based on their age, race/ethnicity, household size, poverty level, and health insurance status distributions. Cut points were generated using tertiles and categorized as High, Moderate, and Low percentages. The classification “Percent non-Hispanic, Native Hawaiian/Pacific Islander” is only available for Hawaii because of low numbers in this category in other available locations. This limitation also applies to other race/ethnicity categories within certain jurisdictions, where no counties fall into a given category. The cut points for each ACS category are further detailed below (a brief example of applying these cut points appears after the vulnerability index categories below):
Age 65 - “Age65”
1 Low (0-24.4%) 2 Moderate (>24.4%-28.6%) 3 High (>28.6%)
Non-Hispanic, Asian - “NHAA”
1 Low (<=5.7%) 2 Moderate (>5.7%-17.4%) 3 High (>17.4%)
Non-Hispanic, American Indian/Alaskan Native - “NHIA”
1 Low (<=0.7%) 2 Moderate (>0.7%-30.1%) 3 High (>30.1%)
Non-Hispanic, Black - “NHBA”
1 Low (<=2.5%) 2 Moderate (>2.5%-37%) 3 High (>37%)
Hispanic - “HISP”
1 Low (<=18.3%) 2 Moderate (>18.3%-45.5%) 3 High (>45.5%)
Population in Poverty - “Pov”
1 Low (0-12.3%) 2 Moderate (>12.3%-17.3%) 3 High (>17.3%)
Population Uninsured - “Unins”
1 Low (0-7.1%) 2 Moderate (>7.1%-11.4%) 3 High (>11.4%)
Average Household Size - “HH”
1 Low (1-2.4) 2 Moderate (>2.4-2.6) 3 High (>2.6)
Community Vulnerability Index Value - “CCVI”
COVID-19 Community Vulnerability Index (CCVI) scores are from Surgo Ventures. The scores range from 0 to 1; cut points were generated based on tertiles and categorized as:
1 Low Vulnerability (0.0-0.4) 2 Moderate Vulnerability (0.4-0.6) 3 High Vulnerability (0.6-1.0)
Social Vulnerability Index Value - “SVI”
Social Vulnerability Index (SVI) scores (vintage 2020), which also range from 0 to 1, are from CDC/ATSDR’s Geospatial Research, Analysis & Services Program. Cut points for CCVI and SVI scores were generated based on tertiles and categorized as:
1 Low Vulnerability (0-0.333) 2 Moderate Vulnerability (0.334-0.666) 3 High Vulnerability (0.667-1)
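Putting these pieces together, the sketch below shows one way the cut points above could be applied to classify counties and to compute case rates per 100,000 persons for each resulting category, as described in the dataset overview. It is a minimal illustration: the column names and values are hypothetical placeholders, not the dataset's actual schema.

```python
import pandas as pd

# Hypothetical county-level inputs; column names are placeholders, not the dataset schema.
counties = pd.DataFrame({
    "fips":       ["01001", "01003", "01005", "01007"],
    "pct_pov":    [10.0, 15.0, 22.0, 13.0],   # percent of population in poverty (ACS)
    "svi":        [0.20, 0.45, 0.80, 0.55],   # CDC/ATSDR SVI score
    "cases":      [1200, 3400, 900, 2100],
    "population": [55000, 230000, 25000, 90000],
})

def poverty_category(pct):
    """Apply the 'Pov' cut points listed above (1 = Low, 2 = Moderate, 3 = High)."""
    if pct <= 12.3:
        return 1
    return 2 if pct <= 17.3 else 3

def svi_category(score):
    """Apply the SVI tertile cut points listed above."""
    if score <= 0.333:
        return 1
    return 2 if score <= 0.666 else 3

counties["Pov"] = counties["pct_pov"].apply(poverty_category)
counties["SVI"] = counties["svi"].apply(svi_category)

# Rate per 100,000 persons across all counties sharing a category:
# total cases divided by the total population of those counties.
rates = (counties.groupby("Pov")[["cases", "population"]].sum()
                 .assign(rate_per_100k=lambda d: d["cases"] / d["population"] * 100_000))
print(rates)
```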
The CDC WONDER Mortality - Underlying Cause of Death online database is a county-level national mortality and population database spanning the years 1979-2008. The number of deaths, crude death rates, age-adjusted death rates, standard errors, and 95% confidence intervals for death rates can be obtained by place of residence (total U.S., Census region, Census division, state, and county), age group (including infant age groups), race (years 1979-1998: White, Black, and Other; years 1999-2008: American Indian or Alaska Native, Asian or Pacific Islander, Black or African American, and White), Hispanic origin (years 1979-1998: not available; years 1999-present: Hispanic or Latino, not Hispanic or Latino, Not Stated), gender, year of death, underlying cause of death (years 1979-1998: 4-digit ICD-9 code and 72 cause-of-death recode; years 1999-present: 4-digit ICD-10 codes and 113 cause-of-death recode, as well as the Injury Mortality matrix classification for Intent and Mechanism), and urbanization level of residence (2006 NCHS urban-rural classification scheme for counties). The Compressed Mortality data are produced by the National Center for Health Statistics.
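For orientation, a crude death rate is simply deaths divided by population times 100,000, while an age-adjusted rate weights age-specific rates by a standard population. The sketch below illustrates direct standardization with made-up age groups, counts, and weights; it is not CDC WONDER's implementation and does not use the actual U.S. standard population.

```python
# Illustration of crude vs. directly age-adjusted death rates.
# The age groups, counts, and standard-population weights below are made up
# for the example; they are not the U.S. 2000 standard population.
deaths     = {"<45": 120,     "45-64": 800,     "65+": 4200}
population = {"<45": 600_000, "45-64": 250_000, "65+": 150_000}
std_weight = {"<45": 0.70,    "45-64": 0.20,    "65+": 0.10}  # must sum to 1

crude = sum(deaths.values()) / sum(population.values()) * 100_000

age_adjusted = sum(
    (deaths[g] / population[g] * 100_000) * std_weight[g]  # age-specific rate x weight
    for g in deaths
)

print(f"crude: {crude:.1f} per 100,000")          # 512.0
print(f"age-adjusted: {age_adjusted:.1f} per 100,000")  # 358.0 with these made-up weights
```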
Notice of data discontinuation: Since the start of the pandemic, AP has reported case and death counts from data provided by Johns Hopkins University. Johns Hopkins University has announced that they will stop their daily data collection efforts after March 10. As Johns Hopkins stops providing data, the AP will also stop collecting daily numbers for COVID cases and deaths. The HHS and CDC now collect and visualize key metrics for the pandemic. AP advises using those resources when reporting on the pandemic going forward.
Dataset update notes: April 9, 2020; April 20, 2020; April 29, 2020; September 1, 2020; February 12, 2021 (new_deaths column); February 16, 2021.
The AP is using data collected by the Johns Hopkins University Center for Systems Science and Engineering as our source for outbreak caseloads and death counts for the United States and globally.
The Hopkins data is available at the county level in the United States. The AP has paired this data with population figures and county rural/urban designations, and has calculated caseload and death rates per 100,000 people. Be aware that caseloads may reflect the availability of tests -- and the ability to turn around test results quickly -- rather than actual disease spread or true infection rates.
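As a rough illustration of the per-100,000 calculation described above (the column names and values are hypothetical, not the actual AP/Hopkins file layout):

```python
import pandas as pd

# Hypothetical county rows; column names are placeholders, not the actual AP/Hopkins schema.
df = pd.DataFrame({
    "county": ["Adams", "Brown"],
    "cumulative_cases": [1500, 40],
    "cumulative_deaths": [30, 1],
    "population": [75000, 4000],
})
df["cases_per_100k"] = df["cumulative_cases"] / df["population"] * 100_000
df["deaths_per_100k"] = df["cumulative_deaths"] / df["population"] * 100_000
print(df)  # note how the small county's rate is sensitive to a handful of cases
```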
This data is from the Hopkins dashboard that is updated regularly throughout the day. Like all organizations dealing with data, Hopkins is constantly refining and cleaning up their feed, so there may be brief moments where data does not appear correctly. At this link, you’ll find the Hopkins daily data reports, and a clean version of their feed.
The AP is updating this dataset hourly at 45 minutes past the hour.
To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.
Use AP's queries to filter the data or to join to other datasets we've made available to help cover the coronavirus pandemic:
Filter cases by state here
Rank states by their status as current hotspots. Calculates the 7-day rolling average of new cases per capita in each state: https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker/workspace/query?queryid=481e82a4-1b2f-41c2-9ea1-d91aa4b3b1ac
Find recent hotspots within your state by running a query to calculate the 7-day rolling average of new cases per capita in each county: https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker/workspace/query?queryid=b566f1db-3231-40fe-8099-311909b7b687&showTemplatePreview=true
Join county-level case data to an earlier dataset released by AP on local hospital capacity here. To find out more about the hospital capacity dataset, see the full details.
Pull the 100 counties with the highest per-capita confirmed cases here
Rank all the counties by the highest per-capita rate of new cases in the past 7 days here. Be aware that because this ranks per-capita caseloads, very small counties may rise to the very top, so take into account raw caseload figures as well.
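For readers working outside data.world, a pandas equivalent of the 7-day rolling average per capita behind these queries might look like the sketch below; the column names and counts are hypothetical, not the actual feed schema.

```python
import pandas as pd

# Hypothetical daily cumulative counts for one county; not the actual feed schema.
daily = pd.DataFrame({
    "date": pd.date_range("2020-07-01", periods=10, freq="D"),
    "cumulative_cases": [100, 110, 125, 125, 140, 160, 185, 200, 230, 260],
    "population": 50_000,
})
daily["new_cases"] = daily["cumulative_cases"].diff().clip(lower=0)
daily["avg_7day_per_100k"] = (
    daily["new_cases"].rolling(7).mean() / daily["population"] * 100_000
)
print(daily[["date", "new_cases", "avg_7day_per_100k"]])
```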
The AP has designed an interactive map to track COVID-19 cases reported by Johns Hopkins.
The interactive map is available at https://datawrapper.dwcdn.net/nRyaf/15/
Johns Hopkins timeseries data
- Johns Hopkins pulls data regularly to update their dashboard. Once a day, around 8pm EDT, Johns Hopkins adds the counts for all areas they cover to the timeseries file. These counts are snapshots of the latest cumulative counts provided by the source on that day. This can lead to inconsistencies if a source updates their historical data for accuracy, either increasing or decreasing the latest cumulative count.
- Johns Hopkins periodically edits their historical timeseries data for accuracy. They provide a file documenting all errors in their timeseries files that they have identified and fixed here
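For consumers of the timeseries file, one practical consequence of these snapshot updates is that a cumulative series can occasionally decrease. The sketch below shows two common, purely illustrative ways to guard against that when deriving daily new counts; neither is Johns Hopkins' own processing.

```python
import pandas as pd

# Hypothetical cumulative series containing a downward revision at the end (7 -> 6).
cumulative = pd.Series([0, 0, 1, 1, 2, 5, 7, 7, 6])

# Two illustrative consumer-side guards against decreases in a cumulative series:
carry_forward_max = cumulative.cummax()               # keeps earlier peaks: 0,0,1,1,2,5,7,7,7
propagate_back    = cumulative[::-1].cummin()[::-1]   # lowers earlier values: 0,0,1,1,2,5,6,6,6

# Daily new counts derived from the backward-propagated series are non-negative.
daily_new = propagate_back.diff().fillna(propagate_back.iloc[0])
print(daily_new.tolist())
```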
This data should be credited to the Johns Hopkins University COVID-19 tracking project.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Location of death by urbanization level and race, with number of deaths and percentages in parentheses. NaN = missing or unidentified data due to a sample size of less than 20.
On March 10, 2023, the Johns Hopkins Coronavirus Resource Center ceased collecting and reporting global COVID-19 data. For updated cases, deaths, and vaccine data, please visit the World Health Organization (WHO). For more information, visit the Johns Hopkins Coronavirus Resource Center.
COVID-19 Trends Methodology
Our goal is to analyze and present daily updates in the form of recent trends within countries, states, or counties during the COVID-19 global pandemic. The data we are analyzing is taken directly from the Johns Hopkins University Coronavirus COVID-19 Global Cases Dashboard, though we expect to be one day behind the dashboard’s live feeds to allow for quality assurance of the data. DOI: https://doi.org/10.6084/m9.figshare.12552986
Revision history:
3/7/2022 - Adjusted the rate of active cases calculation in the U.S. to reflect the rates of serious and severe cases due to the nearly completely dominant Omicron variant.
6/24/2020 - Expanded Case Rates discussion to include fix on 6/23 for calculating active cases.
6/22/2020 - Added Executive Summary and Subsequent Outbreaks sections.
Revisions on 6/10/2020 based on updated CDC reporting. This affects the estimate of active cases by revising the average duration of cases with hospital stays downward from 30 days to 25 days. The result shifted 76 U.S. counties from the Epidemic to the Spreading trend, with no change for national-level trends.
Methodology update on 6/2/2020: This sets the length of the tail of new cases to 6 to a maximum of 14 days, rather than 21 days as determined by the last 1/3 of cases. This was done to align trends and criteria for them with U.S. CDC guidance. The impact is that areas transition into the Controlled trend sooner because they are not bearing the burden of new cases from 15-21 days earlier.
Correction on 6/1/2020.
Discussion of our assertion of an abundance of caution in assigning trends in rural counties added 5/7/2020.
Revisions added on 4/30/2020 are highlighted.
Revisions added on 4/23/2020 are highlighted.
Executive Summary
COVID-19 Trends is a methodology for characterizing the current trend for places during the COVID-19 global pandemic. Each day we assign one of five trends: Emergent, Spreading, Epidemic, Controlled, or End Stage to geographic areas based on the number of new cases, the number of active cases, the total population, and an algorithm (described below) that contextualizes the most recent fourteen days with the overall COVID-19 case history. Currently we analyze the countries of the world and U.S. counties. The purpose is to give policymakers, citizens, and analysts a fact-based, data-driven sense of the direction each place is currently going. When a place has its initial cases, it is assigned Emergent, and if that place controls the rate of new cases, it can move directly to Controlled, and even to End Stage in a short time. However, if the reporting or measures to curtail spread are not adequate and significant numbers of new cases continue, it is assigned to Spreading, and in cases where the spread is clearly uncontrolled, the Epidemic trend. We analyze the data reported by Johns Hopkins University to produce the trends, and we report the rates of cases, spikes of new cases, the number of days since the last reported case, and number of deaths.
We also make adjustments to the assignments based on population so rural areas are not assigned trends based solely on case rates, which can be quite high relative to local populations. Two key factors are not consistently known or available and should be taken into consideration with the assigned trend. First is the amount of resources, e.g., hospital beds, physicians, etc., that are currently available in each area. Second is the number of recoveries, which are often not tested or reported. On the latter, we provide a probable number of active cases based on CDC guidance for the typical duration of mild to severe cases.
Reasons for undertaking this work in March of 2020:
The popular online maps and dashboards show counts of confirmed cases, deaths, and recoveries by country or administrative sub-region. Comparing the counts of one country to another can only provide a basis for comparison during the initial stages of the outbreak, when counts were low and the number of local outbreaks in each country was low. By late March 2020, countries with small populations were being left out of the mainstream news because it was not easy to recognize they had high per capita rates of cases (Switzerland, Luxembourg, Iceland, etc.). Additionally, comparing countries that have had confirmed COVID-19 cases for high numbers of days to countries where the outbreak occurred recently is also a poor basis for comparison.
The graphs of confirmed cases and daily increases in cases were fit into a standard size rectangle, though the Y-axis for one country had a maximum value of 50, and for another country 100,000, which potentially misled people interpreting the slope of the curve. Such misleading circumstances affected comparing large population countries to small population countries, or countries with low numbers of cases to China, which had a large count of cases in the early part of the outbreak. These challenges for interpreting and comparing these graphs represent work each reader must do based on their experience and ability. Thus, we felt it would be a service to attempt to automate the thought process experts would use when visually analyzing these graphs, particularly the most recent tail of the graph, and provide readers with a resulting synthesis to characterize the state of the pandemic in that country, state, or county.
The lack of reliable data for confirmed recoveries and therefore active cases. Merely subtracting deaths from total cases to arrive at this figure progressively loses accuracy after two weeks. The reason is 81% of cases recover after experiencing mild symptoms in 10 to 14 days. Severe cases are 14% and last 15-30 days (based on an average of 11 days with symptoms when admitted to hospital, plus a 12-day median stay, plus one week to include a full range of severely affected people who recover). Critical cases are 5% and last 31-56 days. Sources: U.S. CDC. April 3, 2020. Interim Clinical Guidance for Management of Patients with Confirmed Coronavirus Disease (COVID-19). Accessed online. Initial older guidance was also obtained online.
Additionally, many people who recover may not be tested, and many who are, may not be tracked due to privacy laws. Thus, the formula used to compute an estimate of active cases is: Active Cases = 100% of new cases in past 14 days + 19% from past 15-25 days + 5% from past 26-49 days - total deaths. On 3/17/2022, the U.S.
calculation was adjusted to: Active Cases = 100% of new cases in past 14 days + 6% from past 15-25 days + 3% from past 26-49 days - total deaths. Sources: https://www.cdc.gov/mmwr/volumes/71/wr/mm7104e4.htm https://covid.cdc.gov/covid-data-tracker/#variant-proportions If a new variant arrives and appears to cause higher rates of serious cases, we will roll back this adjustment.
We’ve never been inside a pandemic with the ability to learn of new cases as they are confirmed anywhere in the world. After reviewing epidemiological and pandemic scientific literature, three needs arose. We need to specify which portions of the pandemic lifecycle this map covers. The World Health Organization (WHO) specifies six phases. The source data for this map begins just after the beginning of Phase 5: human-to-human spread, and encompasses Phase 6: pandemic phase. Phase 6 is only characterized in terms of pre- and post-peak. However, these two phases are after-the-fact analyses and cannot be ascertained during the event. Instead, we describe (below) a series of five trends for Phase 6 of the COVID-19 pandemic.
Choosing terms to describe the five trends was informed by the scientific literature, particularly the use of epidemic, which signifies uncontrolled spread. The five trends are: Emergent, Spreading, Epidemic, Controlled, and End Stage. Not every locale will experience all five, but all will experience at least three: emergent, controlled, and end stage. This layer presents the current trends for the COVID-19 pandemic by country (or appropriate level). There are five trends:
Emergent: Early stages of outbreak.
Spreading: Early stages and, depending on an administrative area’s capacity, this may represent a manageable rate of spread.
Epidemic: Uncontrolled spread.
Controlled: Very low levels of new cases.
End Stage: No new cases.
These trends can be applied at several levels of administration:
Local: e.g., City, District or County – a.k.a. Admin level 2
State: e.g., State or Province – a.k.a. Admin level 1
National: Country – a.k.a. Admin level 0
We recommend that at least 100,000 persons be represented by a unit; granted this may not be possible, and then the case rate per 100,000 will become more important.
Key Concepts and Basis for Methodology:
10 Total Cases minimum threshold: Empirically, there must be enough cases to constitute an outbreak. Ideally, this would be 5.0 per 100,000, but not every area has a population of 100,000 or more. Ten, or fewer, cases are also relatively less difficult to track and trace to sources.
21 Days of Cases minimum threshold: Empirically based on COVID-19 and would need to be adjusted for any other event. 21 days is also the minimum threshold for analyzing the “tail” of the new cases curve, providing seven cases as the basis for a likely trend (note that 21 days in the tail is preferred). This is the minimum needed to encompass the onset and duration of a normal case (5-7 days plus 10-14 days). Specifically, a median of 5.1 days incubation time, and 11.2 days for 97.5% of cases to incubate. This is also driven by pressure to understand trends and could easily be adjusted to 28 days. Source
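As a minimal illustration of the active-case arithmetic quoted above, the sketch below applies both sets of weights from the text (the original 19%/5% weights and the 3/17/2022 U.S. adjustment of 6%/3%) over 14-day, 15-25-day, and 26-49-day windows. The daily case series and death count are hypothetical.

```python
# Sketch of the active-case estimate described above, applied to a hypothetical
# list of daily new cases (most recent day last). Weights and windows follow the text.

def estimated_active_cases(daily_new_cases, total_deaths,
                           w_15_25=0.19, w_26_49=0.05):
    recent_14  = sum(daily_new_cases[-14:])      # 100% of the past 14 days
    days_15_25 = sum(daily_new_cases[-25:-14])   # days 15-25 back
    days_26_49 = sum(daily_new_cases[-49:-25])   # days 26-49 back
    return recent_14 + w_15_25 * days_15_25 + w_26_49 * days_26_49 - total_deaths

new_cases = [50] * 60  # hypothetical: 50 new cases per day for 60 days
print(estimated_active_cases(new_cases, total_deaths=120))                       # original weights
print(estimated_active_cases(new_cases, 120, w_15_25=0.06, w_26_49=0.03))        # 3/17/2022 U.S. adjustment
```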
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Total mortality distribution categorized by urbanization, race, sex, and age 1999-2020.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background
Canine rabies is a neglected disease causing 55,000 human deaths worldwide per year, and 99% of all cases are transmitted by dog bites. In N'Djaména, the capital of Chad, rabies is endemic with an incidence of 1.71/1,000 dogs (95% C.I. 1.45–1.98). The gold standard of rabies diagnosis is the direct immunofluorescent antibody (DFA) test, requiring a fluorescent microscope. The Centers for Disease Control and Prevention (CDC, Atlanta, United States of America) developed a histochemical test using low-cost light microscopy, the direct rapid immunohistochemical test (dRIT).
Methodology/Principal Findings
We evaluated the dRIT in the Chadian National Veterinary Laboratory in N'Djaména by testing 35 fresh samples in parallel with both the DFA and dRIT. Additional retests (n = 68 in Chad, n = 74 at CDC) by DFA and dRIT of stored samples enhanced the power of the evaluation. All samples were from dogs, cats, and in one case from a bat. The dRIT performed very well compared to DFA. We found a 100% agreement of the dRIT and DFA in fresh samples (n = 35). Results of retesting at CDC and in Chad depended on the condition of samples. When the sample was in good condition (fresh brain tissue), we found a simple Cohen's kappa coefficient related to the DFA diagnostic results in fresh tissue of 0.87 (95% C.I. 0.63–1) up to 1. For poor quality samples, the kappa values were between 0.13 (95% C.I. −0.15–0.40) and 0.48 (95% C.I. 0.14–0.82). For samples stored in glycerol, dRIT results were more likely to agree with DFA testing in fresh samples than the DFA retesting.
Conclusion/Significance
The dRIT is as reliable a diagnostic method as the gold standard (DFA) for fresh samples. It has the advantage of requiring only light microscopy, which is 10 times less expensive than a fluorescence microscope. Reduced cost suggests high potential for making rabies diagnosis available in other cities and rural areas of Africa for large populations for which a capacity for diagnosis will contribute to rabies control.
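For readers who want to reproduce the agreement statistics reported above, the sketch below computes a simple (unweighted) Cohen's kappa from a 2x2 table of paired dRIT/DFA results. The counts are hypothetical placeholders, not the study's data.

```python
# Simple (unweighted) Cohen's kappa for a 2x2 agreement table of paired
# dRIT vs. DFA results. The counts below are hypothetical, not the study data.
#                 DFA +   DFA -
agreement = [[18,      1],    # dRIT +
             [ 2,     14]]    # dRIT -

n = sum(sum(row) for row in agreement)
observed = (agreement[0][0] + agreement[1][1]) / n                       # proportion agreeing
p_pos = (sum(agreement[0]) / n) * ((agreement[0][0] + agreement[1][0]) / n)
p_neg = (sum(agreement[1]) / n) * ((agreement[0][1] + agreement[1][1]) / n)
expected = p_pos + p_neg                                                 # chance agreement
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # ~0.83 for these hypothetical counts
```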
COVID-19 Trends Methodology
Our goal is to analyze and present daily updates in the form of recent trends within countries, states, or counties during the COVID-19 global pandemic. The data we are analyzing is taken directly from the Johns Hopkins University Coronavirus COVID-19 Global Cases Dashboard, though we expect to be one day behind the dashboard’s live feeds to allow for quality assurance of the data.
Revision history:
Revisions added on 4/23/2020 are highlighted.
Revisions added on 4/30/2020 are highlighted.
Discussion of our assertion of an abundance of caution in assigning trends in rural counties added 5/7/2020.
Correction on 6/1/2020.
Methodology update on 6/2/2020: This sets the length of the tail of new cases to 6 to a maximum of 14 days, rather than 21 days as determined by the last 1/3 of cases. This was done to align trends and criteria for them with U.S. CDC guidance. The impact is that areas transition into the Controlled trend sooner because they are not bearing the burden of new cases from 15-21 days earlier.
Reasons for undertaking this work:
The popular online maps and dashboards show counts of confirmed cases, deaths, and recoveries by country or administrative sub-region. Comparing the counts of one country to another can only provide a basis for comparison during the initial stages of the outbreak, when counts were low and the number of local outbreaks in each country was low. By late March 2020, countries with small populations were being left out of the mainstream news because it was not easy to recognize they had high per capita rates of cases (Switzerland, Luxembourg, Iceland, etc.). Additionally, comparing countries that have had confirmed COVID-19 cases for high numbers of days to countries where the outbreak occurred recently is also a poor basis for comparison.
The graphs of confirmed cases and daily increases in cases were fit into a standard size rectangle, though the Y-axis for one country had a maximum value of 50, and for another country 100,000, which potentially misled people interpreting the slope of the curve. Such misleading circumstances affected comparing large population countries to small population countries, or countries with low numbers of cases to China, which had a large count of cases in the early part of the outbreak. These challenges for interpreting and comparing these graphs represent work each reader must do based on their experience and ability. Thus, we felt it would be a service to attempt to automate the thought process experts would use when visually analyzing these graphs, particularly the most recent tail of the graph, and provide readers with a resulting synthesis to characterize the state of the pandemic in that country, state, or county.
The lack of reliable data for confirmed recoveries and therefore active cases. Merely subtracting deaths from total cases to arrive at this figure progressively loses accuracy after two weeks. The reason is 81% of cases recover after experiencing mild symptoms in 10 to 14 days. Severe cases are 14% and last 15-30 days (based on an average of 11 days with symptoms when admitted to hospital, plus a 12-day median stay, plus one week to include a full range of severely affected people who recover). Critical cases are 5% and last 31-56 days. Sources: U.S. CDC. April 3, 2020. Interim Clinical Guidance for Management of Patients with Confirmed Coronavirus Disease (COVID-19). Accessed online. Initial older guidance was also obtained online.
Additionally, many people who recover may not be tested, and many who are, may not be tracked due to privacy laws. Thus, the formula used to compute an estimate of active cases is: Active Cases = 100% of new cases in past 14 days + 19% from past 15-30 days + 5% from past 31-56 days - total deaths.
We’ve never been inside a pandemic with the ability to learn of new cases as they are confirmed anywhere in the world. After reviewing epidemiological and pandemic scientific literature, three needs arose. We need to specify which portions of the pandemic lifecycle this map covers. The World Health Organization (WHO) specifies six phases. The source data for this map begins just after the beginning of Phase 5: human-to-human spread, and encompasses Phase 6: pandemic phase. Phase 6 is only characterized in terms of pre- and post-peak. However, these two phases are after-the-fact analyses and cannot be ascertained during the event. Instead, we describe (below) a series of five trends for Phase 6 of the COVID-19 pandemic.
Choosing terms to describe the five trends was informed by the scientific literature, particularly the use of epidemic, which signifies uncontrolled spread. The five trends are: Emergent, Spreading, Epidemic, Controlled, and End Stage. Not every locale will experience all five, but all will experience at least three: emergent, controlled, and end stage. This layer presents the current trends for the COVID-19 pandemic by country (or appropriate level). There are five trends:
Emergent: Early stages of outbreak.
Spreading: Early stages and, depending on an administrative area’s capacity, this may represent a manageable rate of spread.
Epidemic: Uncontrolled spread.
Controlled: Very low levels of new cases.
End Stage: No new cases.
These trends can be applied at several levels of administration:
Local: e.g., City, District or County – a.k.a. Admin level 2
State: e.g., State or Province – a.k.a. Admin level 1
National: Country – a.k.a. Admin level 0
We recommend that at least 100,000 persons be represented by a unit; granted this may not be possible, and then the case rate per 100,000 will become more important.
Key Concepts and Basis for Methodology:
10 Total Cases minimum threshold: Empirically, there must be enough cases to constitute an outbreak. Ideally, this would be 5.0 per 100,000, but not every area has a population of 100,000 or more. Ten, or fewer, cases are also relatively less difficult to track and trace to sources.
21 Days of Cases minimum threshold: Empirically based on COVID-19 and would need to be adjusted for any other event. 21 days is also the minimum threshold for analyzing the “tail” of the new cases curve, providing seven cases as the basis for a likely trend (note that 21 days in the tail is preferred). This is the minimum needed to encompass the onset and duration of a normal case (5-7 days plus 10-14 days). Specifically, a median of 5.1 days incubation time, and 11.2 days for 97.5% of cases to incubate. This is also driven by pressure to understand trends and could easily be adjusted to 28 days. Source used as basis: Stephen A. Lauer, MS, PhD; Kyra H. Grantz, BA; Qifang Bi, MHS; Forrest K. Jones, MPH; Qulu Zheng, MHS; Hannah R. Meredith, PhD; Andrew S. Azman, PhD; Nicholas G. Reich, PhD; Justin Lessler, PhD. 2020. The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application. Annals of Internal Medicine. DOI: 10.7326/M20-0504.
New Cases per Day (NCD) = Measures the daily spread of COVID-19.
This is the basis for all rates.
Back-casting revisions: In the Johns Hopkins data, the structure is to provide the cumulative number of cases per day, which presumes an ever-increasing sequence of numbers, e.g., 0,0,1,1,2,5,7,7,7, etc. However, revisions do occur and would look like 0,0,1,1,2,5,7,7,6. To accommodate this, we revised the lists to eliminate decreases, which makes this list look like 0,0,1,1,2,5,6,6,6.
Reporting Interval: In the early weeks, Johns Hopkins' data provided reporting every day regardless of change. In late April, this changed, allowing for days to be skipped if no new data was available. The day was still included, but the value of total cases was set to Null. The processing was therefore updated to include tracking of the spacing between intervals with valid values.
100 New Cases in a day as a spike threshold: Empirically, this is based on COVID-19’s rate of spread, or R0 of ~2.5, which indicates each case will infect between two and three other people. There is a point at which each administrative area’s capacity will not have the resources to trace and account for all contacts of each patient. Thus, this is an indicator of an uncontrolled or epidemic trend. Spiking activity in combination with the rate of new cases is the basis for determining whether an area has a spreading or epidemic trend (see below). Source used as basis: World Health Organization (WHO). 16-24 Feb 2020. Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19). Obtained online.
Mean of Recent Tail of NCD = Empirical, and a COVID-19-specific basis for establishing a recent trend. The recent mean of NCD is taken from the most recent fourteen days. A minimum of 21 days of cases is required for analysis but cannot be considered reliable. Thus, a preference of 42 days of cases ensures much higher reliability. This analysis is not explanatory and thus merely represents a likely trend. The tail is analyzed for the following:
Most recent 2 days: In terms of likelihood, this does not mean much, but can indicate a reason for hope and a basis to share positive change that is not yet a trend. There are two worthwhile indicators:
Last 2 days' count of new cases is less than any in either the past five or 14 days.
Past 2 days has only one or fewer new cases – this is an extremely positive outcome if the rate of testing has continued at the same rate as the previous 5 days or 14 days.
Most recent 5 days: In terms of likelihood, this is more meaningful, as it does represent a short-term trend. There are five worthwhile indicators:
Past five days is greater than past 2 days and past 14 days, indicating the potential of the past 2 days being an aberration.
Past five days is greater than past 14 days and less than past 2 days, indicating a slight positive trend, but likely still within the peak trend time frame.
Past five days is less than the past 14 days. This means a downward trend. This would be an
The 2005 Cambodia Demographic and Health Survey (CDHS) uses the same methodology as its predecessor, the 2000 Cambodia Demographic and Health Survey, allowing policymakers to use the two surveys to assess trends over time.
The primary objective of the CDHS is to provide the Ministry of Health, Ministry of Planning (MOP), and other relevant institutions and users with updated and reliable data on infant and child mortality, fertility preferences, family planning behavior, maternal mortality, utilization of maternal and child health services, health expenditures, women’s status, domestic violence, and knowledge and behavior regarding HIV/AIDS and other sexually transmitted infections. This information contributes to policy decisions, planning, monitoring, and program evaluation for the development of Cambodia, at both national- and local-government levels.
The long-term objectives of the survey are to technically strengthen the capacity of the National Institute of Public Health (NIPH), Ministry of Health, and the National Institute of Statistics (NIS) of MOP for planning, conducting, and analyzing the results of further surveys.
The 2005 DHS survey was conducted by the National Institute of Public Health (NIPH), the Ministry of Health, and the National Institute of Statistics of the Ministry of Planning. The CDHS executive committee and technical committee were established to oversee all technical aspects of implementation. They consisted of representatives from the Ministry of Health, the National Institute of Public Health, Department of Planning and Health Information, the Ministry of Planning, the National Institute of Statistics, the U.S. Agency for International Development (USAID), Department for International Development (DFID), the United Nations Population Fund (UNFPA), and the United Nations Children’s Fund (UNICEF). Funding for the survey came from USAID, the Asian Development Bank (ADB) (under the Health Sector Support Project HSSP, using a grant from the United Kingdom, DFID), UNFPA, UNICEF, and the Centers for Disease Control/Global AIDS Program (CDC/GAP). Technical assistance was provided by ORC Macro.
National
Sample survey data
SAMPLE DESIGN
Creation of the 2005 CDHS sample was based on the objective of collecting a nationally representative sample of completed interviews with women and men between the ages of 15 and 49. To achieve a balance between the ability to provide estimates for all 24 provinces in the country and limiting the sample size, 19 sampling domains were defined, 14 of which correspond to individual provinces and 5 of which correspond to grouped provinces. - Fourteen individual provinces: Banteay Mean Chey, Kampong Cham, Kampong Chhnang, Kampong Speu, Kampong Thom, Kandal, Kratie, Phnom Penh, Prey Veng, Pursat, Siem Reap, Svay Rieng, Takeo, and Otdar Mean Chey; - Five groups of provinces: Battambang and Krong Pailin, Kampot and Krong Kep, Krong Preah Sihanouk and Kaoh Kong, Preah Vihear and Steung Treng, and Mondol Kiri and Rattanak Kiri.
The sample of households was allocated to the sampling domains in such a way that estimates of indicators can be produced with known precision for each of the 19 sampling domains, for all of Cambodia combined, and separately for urban and rural areas of the country.
The sampling frame used for the 2005 CDHS is the complete list of all villages enumerated in the 1998 Cambodia General Population Census (GPC), plus 166 villages which were not enumerated during the 1998 GPC, provided by the National Institute of Statistics (NIS). It includes the entire country and consists of 13,505 villages. The GPC also created maps that delimited the boundaries of every village. Of the total villages, 1,312 villages are designated as urban and 12,193 villages are designated as rural, with an average of 161 households per village.
The survey is based on a stratified sample selected in two stages. Stratification was achieved by separating every reporting domain into urban and rural areas. Thus the 19 domains were stratified into a total of 38 sampling strata. Samples were selected independently in every stratum, by a two stage selection. Implicit stratifications were achieved at each of the lower geographical or administrative levels by sorting the sampling frame according to the geographical/administrative order and by using a probability proportional to size selection at the first stage of selection.
In the first stage, 557 villages were selected with probability proportional to village size. Village size is the number of households residing in the village. Some of the largest villages were further divided into enumeration areas (EA). Thus, the 557 CDHS clusters are either a village or an EA. A listing of all the households was carried out in each of the 557 selected villages during the months of February-April 2005. Listing teams also drew fresh maps delineating village boundaries and identifying all households. These maps and lists were used by field teams during data collection.
The household listings provided the frame from which the selection of households was drawn in the second stage. To ensure a sample size large enough to calculate reliable estimates for all the desired study domains, it was necessary to control the total number of households drawn. This was done by selecting 24 households in every urban EA, and 28 households in every rural EA. The resulting oversampling of small areas and urban areas is corrected by applying sampling weights to the data, which ensures the validity of the sample for all 38 strata (urban/rural, and 19 domains).
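For a two-stage design like the one described above, a household's design weight is the inverse of its overall selection probability. The sketch below illustrates the arithmetic with hypothetical numbers (ignoring the stratum-level allocation for simplicity); these are not the actual CDHS frame values.

```python
# Design weight for a household under a two-stage selection like the one described above.
# All numbers are hypothetical placeholders, not the actual CDHS frame values.

clusters_selected = 557          # villages/EAs selected in stage 1
village_households = 180         # households in this village (its "size")
total_households = 2_200_000     # households on the whole sampling frame
households_selected = 28         # rural take per cluster (24 in urban clusters)
households_listed = 175          # households found in the cluster listing

# Stage 1: probability proportional to size; Stage 2: equal probability within the cluster.
p1 = clusters_selected * village_households / total_households
p2 = households_selected / households_listed
design_weight = 1 / (p1 * p2)
print(round(design_weight, 1))
```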
All women age 15-49 years who were either usual residents of the selected households or visitors present in the household on the night before the survey were eligible to be interviewed. In addition, in a subsample of every second household selected for the survey, all men age 15-49 were eligible to be interviewed (if they were either usual residents of the selected households or visitors present in the household on the night before the survey). The minimum sample size is larger for women than men because complex indicators (such as total fertility and infant and child mortality rates) require larger sample sizes to achieve sampling errors of reasonable size, and these data come from interviews with women.
In the 50 percent subsample, all men and women eligible for the individual interview were also eligible for HIV testing. In addition, in this subsample of households all women eligible for interview and all children under the age of five were eligible for anemia testing. These same women and children were also eligible for height and weight measurement to determine their nutritional status. Women in this same subsample were also eligible to be interviewed with the cause of death module, applicable to women with a child born since January 2002.
The 50 percent subsample not eligible for the man interview was further divided into half, resulting in one-quarter subsamples. In one of the one-quarter subsamples, all women age 15-49 were eligible for the woman's status module in addition to the main interview. In this same one-quarter subsample, one woman per household was eligible for the domestic violence module. In the other one-quarter subsample, women were eligible for neither the woman's status module nor the domestic violence module.
NOTE: See detailed description of the sample design in APPENDIX A of the survey report.
Face-to-face [f2f]
Three questionnaires were used: the Household Questionnaire, Woman Questionnaire, and Man Questionnaire. The content of these questionnaires was based on model questionnaires developed by the MEASURE DHS project. Technical meetings between experts and representatives of the Cambodian government and national and international organizations were held to discuss the content of the questionnaires. Inputs generated by these meetings were used to modify the model questionnaires to reflect the needs of users and relevant population, family planning, and health issues in Cambodia. Final questionnaires were translated from English to Khmer and a great deal of refinement to the translation was accomplished during the pretest of the questionnaires.
The Household Questionnaire served multiple purposes: - It was used to list all of the usual members and visitors in the selected households and was the vehicle for identifying women and men who were eligible for the individual interview. - It collected basic information on the characteristics of each person listed, including age, sex, education, and relationship to the head of the household. - It collected information on characteristics of the household’s dwelling unit, ownership of various durable goods, ownership and use of mosquito nets, and testing of salt for iodine content. - It collected anthropometric (height and weight) measurements and hemoglobin levels. - It was used to register people eligible for collection of samples for later HIV testing. - It had a module on recent illness or death. - It had a module on utilization of health services.
The Women’s Questionnaire covered a wide variety of topics divided into 13 sections: - Respondent Background - Reproduction, including an abortion module - Family Planning - Pregnancy, Postnatal Care, and Children’s Nutrition - Immunization, Health, and Women’s Nutrition - Cause of Death of Children (also known as Verbal Autopsy) - Marriage and Sexual Activity - Fertility Preferences - Husband’s Background and Woman’s Work - HIV/AIDS and Other
During the early 1990s Romania was faced with the reproductive health consequences of an aberrant pronatalist policy enforced for several decades by the Ceausescu's regime. Health policy makers tried to rapidly respond to these consequences by adopting new health strategies to reduce maternal and infant mortality. These strategies included development of the first national family planning program; introduction of new technologies in neonatal and maternal health services; implementation of active measurements to control the HIV/AIDS epidemic; and development of social programs for abandoned, institutionalized, and drug-using children and for domestic violence.
Such a rapidly changing array of critical reproductive health issues could not have been documented and addressed with only the help of vital records. More information was needed to assess the reproductive health status of the Romanian population during a period of rapid change in health care that influenced the health of women and children.
In 1993, the Romanian Ministry of Health, with technical assistance provided by the Division of Reproductive Health of the Centers for Disease Control and Prevention (DRH/CDC), conducted the first national population-based survey of women's reproductive health (93RRHS). The survey was designed to provide the Ministry of Health, international agencies, and nongovernmental organizations active in women's and children's health with essential information on fertility, women's reproductive practices, maternal care, maternal and child mortality, health behaviors, and attitudes toward selected reproductive health issues. The 93RRHS was instrumental in developing, evaluating, and fine-tuning the national family planning program and other reproductive health policies.
In 1996, a representative sample survey of women and men aged 15-24 was implemented to document young adults' sex education, attitudes, sexual behavior, and use of contraception. Such a survey had never before been carried out in Eastern Europe. Survey results were used to plan effective information campaigns, policies, and programs targeting young people, and to monitor and evaluate the impact of programs already in place.
In 1999, a new nationwide reproductive health survey was designed and implemented in Romania (99RRHS) using the same methodology to allow for the study of reproductive health trends among women aged 15-44 and to document the reproductive health of men aged 15-49. The surveys employed two separate probability samples to allow independent estimates for males and females. The survey's Final Report improves on the already impressive contribution of the previous two studies because it: a) documents reproductive health aspects among both women and men of reproductive age (men were selected from different households than women); and b) by oversampling three target judets (Constanta, Iasi and Cluj), documents the impact of region-wide interventions, implemented with USAID support, that consist of the establishment of modern women's health clinics, training of health professionals, development of IEC messages, social marketing, and provision of high-quality contraceptive supplies.
In conclusion, the results of these large nationwide cross-sectional studies implemented in 1993 (sample size of 4,861 women aged 15-44), 1996 (sample size of 2025 women and 2047 men aged 15-24), and 1999 (sample size of 6,888 women aged 15-44 and 2,434 men aged 15-49), allow for generalizing the results to the entire reproductive age population of Romania. Although the surveys did not interview the same households, by applying similar questionnaires, the same sampling and field work methodology, they allow for a) a longitudinal examination of reproductive health issues among women, b) a detailed image of specific aspects of reproductive and sexual behaviors among men and c) a programmatic evaluation of reproductive health services in three regions.
The 99RRHS was designed to collect information from a representative sample of women and men of reproductive age throughout Romania.
Sample survey data [ssd]
The 99RRHS was designed to collect information from a representative sample of women and men of reproductive age throughout Romania. Respondents were selected from the universe of all females aged 15-44 years and all males aged 15-49 years, regardless of marital status, who were living in Romania when the survey was conducted. The desired sample for females was 6,500, including an oversample of women in the three US AID priority judets (Cluj, Constanta, and Iasi).
The desired sample size for males was 2,500. The female and male samples were selected independently.
The survey used a three-stage sampling design, which allows independent estimates for the female and male samples. An updated master sampling frame (EMZOT), based on the 1992 census enumeration areas, was used as the sampling frame (National Commission for Statistics, 1996). The EMZOT master sample represents 3% of the population in each judet. In the female sample, the US AID priority judets were oversampled in both urban and rural areas to allow for independent estimates with adequate precision for women's health behaviors in these judets.
Except for the three oversampled judets (in which all available census sectors in the sample were retained), the first stage of the sample design was a selection of census sectors with probability proportional to the number of households recorded in the EMZOT. This step was accomplished by using a systematic sample with a random start for the female sample. A 50% subsample of the census sectors selected in the female sample (not including the oversample in the priority judets) constituted the first stage of the male sample. Thus, the first-stage selection included 317 sectors for the female sample and 128 sectors for the male sample. In the second stage of sampling, clusters of households were randomly selected in each census sector chosen in the first stage (separate households were selected for the female and male samples). Finally, in each of the households in the female sample, one woman aged 15—44 years was selected at random for interviewing and in the male sample one man aged 15-49 years was randomly selected in each household.
Because only one woman was selected from each household with women of reproductive age, and one male was selected from households with men of reproductive age, all results have been weighted to compensate for the fact that some households included more than one eligible female or male respondent. Survey results were also weighted to adjust for oversampling of households in the three US AID priority judets, and two more weights were added to adjust for non-response and for urban-rural distribution of the population.
Cluster size was determined based on the number of households required to obtain an average of 20 completed interviews per cluster. The number of households in each cluster took into account estimates of unoccupied households, average number of women aged 15-44 per household (men aged 15-49 for the male sample), the interview of only one respondent per household, and an estimated response rate of 90% in urban areas and 92% in rural areas for women and of 85% overall for men. Cluster size was determined to be 51 households in urban areas and 59 households in rural areas for the female sample and 49 and 55 households, respectively, for the male sample.
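The cluster-size arithmetic described above can be sketched as follows. Only the 20-interview target and the 90% urban female response rate come from the text; the occupancy and eligibility figures are assumptions made purely for illustration.

```python
import math

# Back-of-the-envelope version of the cluster-size arithmetic described above.
# Only the 20-interview target and the 90% urban response rate are from the text;
# the occupancy and eligibility figures below are assumed for illustration.
target_interviews = 20
response_rate = 0.90        # urban, female sample (from the text)
p_occupied = 0.95           # assumed share of selected dwellings that are occupied
p_has_eligible = 0.46       # assumed share of occupied households with a woman aged 15-44

households_needed = math.ceil(
    target_interviews / (response_rate * p_occupied * p_has_eligible)
)
print(households_needed)    # ~51 with these assumed inputs
```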
Face-to-face [f2f]
Of the 17,349 households selected in the female sample and 6,310 households selected in the male sample, 7,645 and 2,812 included at least one eligible respondent (a woman aged 15-44 or a man aged 15-49). Of these, 6,888 women and 2,434 men were successfully interviewed, yielding response rates of 90% and 87%, respectively. As many as four visits were made to each household with eligible respondents who were not at home during the initial household approach.
Almost all respondents who were selected to participate and who could be reached agreed to be interviewed. Only 2% of respondents (regardless of gender) refused to be interviewed, and 7% of women and 11% of men could not be located. Response rates were not significantly different by residence, except for Bucharest, where the participation rate was slightly lower. Even though the overall response rate was similar in urban and rural areas, eligible respondents in urban areas were somewhat more likely to refuse to be interviewed; in rural areas eligible respondents were more likely to not be found at home.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
Sociodemographic disparities in genitourinary cancer-related mortality have been insufficiently studied, particularly across multiple cancer types. This study aimed to investigate gender, racial, and geographic disparities in mortality rates for the most common genitourinary cancers in the United States.
Methods
Mortality data for prostate, bladder, kidney, and testicular cancers were obtained from the Centers for Disease Control and Prevention (CDC) WONDER database between 1999 and 2020. Age-adjusted mortality rates (AAMRs) were analyzed by year, gender, race, urban-rural status, and geographic region using a significance level of p < 0.05.
Results
Overall, AAMRs for prostate, bladder, and kidney cancer declined significantly, while testicular cancer-related mortality remained stable. Bladder and kidney cancer AAMRs were 3-4 times higher in males than females. Prostate cancer mortality was highest in Black individuals/African Americans and began increasing after 2015. Bladder cancer mortality decreased significantly in White individuals, Black individuals, African Americans, and Asians/Pacific Islanders but remained stable in American Indian/Alaska Natives. Kidney cancer-related mortality was highest in White individuals but declined significantly in other races. Testicular cancer mortality increased significantly in White individuals but remained stable in Black individuals and African Americans. Genitourinary cancer mortality decreased in metropolitan areas but either increased (bladder and testicular cancer) or remained stable (kidney cancer) in non-metropolitan areas. Prostate and kidney cancer mortality was highest in the Midwest, bladder cancer in the South, and testicular cancer in the West.
Discussion
Significant sociodemographic disparities exist in the mortality trends of genitourinary cancers in the United States. These findings highlight the need for targeted interventions and further research to address these disparities and improve outcomes for all populations affected by genitourinary cancers.
The 2014 Kenya Demographic and Health Survey (KDHS) provides information to help monitor and evaluate population and health status in Kenya. The survey, which follows up KDHS surveys conducted in 1989, 1993, 1998, 2003, and 2008-09, is of special importance for several reasons. New indicators not collected in previous KDHS surveys, such as noncommunicable diseases, fistula, and men's experience of domestic violence, are included. Also, it is the first national survey to provide estimates for demographic and health indicators at the county level. Following adoption of a constitution in Kenya in 2010 and devolution of administrative powers to the counties, the new 2014 KDHS data should be valuable to managers and planners. The 2014 KDHS has specifically collected data to estimate fertility, to assess childhood, maternal, and adult mortality, to measure changes in fertility and contraceptive prevalence, to examine basic indicators of maternal and child health, to estimate nutritional status of women and children, to describe patterns of knowledge and behaviour related to the transmission of HIV and other sexually transmitted infections, and to ascertain the extent and pattern of domestic violence and female genital cutting. Unlike the 2003 and 2008-09 KDHS surveys, this survey did not include HIV and AIDS testing. HIV prevalence estimates are available from the 2012 Kenya AIDS Indicator Survey (KAIS), completed prior to the 2014 KDHS. Results from the 2014 KDHS show a continued decline in the total fertility rate (TFR). Fertility decreased from 4.9 births per woman in 2003 to 4.6 in 2008-09 and further to 3.9 in 2014, a one-child decline over the past 10 years and the lowest TFR ever recorded in Kenya. This is corroborated by the marked increase in the contraceptive prevalence rate (CPR) from 46 percent in 2008-09 to 58 percent in the current survey. The decline in fertility accompanies a marked decline in infant and child mortality. All early childhood mortality rates have declined between the 2003 and 2014 KDHS surveys. Total under-5 mortality declined from 115 deaths per 1,000 live births in the 2003 KDHS to 52 deaths per 1,000 live births in the 2014 KDHS. The maternal mortality ratio is 362 maternal deaths per 100,000 live births for the seven-year period preceding the survey; however, this is not statistically different from the ratios reported in the 2003 and 2008-09 KDHS surveys and does not indicate any decline over time. The proportion of mothers who reported receiving antenatal care from a skilled health provider increased from 88 percent to 96 percent between 2003 and 2014. The percentage of births attended by a skilled provider and the percentage of births occurring in health facilities each increased by about 20 percentage points between 2003 and 2014. The percentage of children age 12-23 months who have received all basic vaccines increased slightly from the 77 percent observed in the 2008-09 KDHS to 79 percent in 2014. Six in ten households (59 percent) own at least one insecticide-treated net, and 48 percent of Kenyans have access to one. In malaria endemic areas, 39 percent of women received the recommended dosage of intermittent preventive treatment for malaria during pregnancy. Awareness of AIDS is universal in Kenya; however, only 56 percent of women and 66 percent of men have comprehensive knowledge about HIV and AIDS prevention and transmission. The 2014 KDHS was conducted as a joint effort by many organisations. 
The Kenya National Bureau of Statistics (KNBS) served as the implementing agency, providing guidance in overall survey planning, development of survey tools, training of personnel, data collection, processing, analysis, and dissemination of the results. The Bureau acknowledges and appreciates the following institutions and agencies for the roles they played in the success of this exercise: Ministry of Health (MOH), National AIDS Control Council (NACC), National Council for Population and Development (NCPD), Kenya Medical Research Institute (KEMRI), Ministry of Labour, Social Security and Services, United States Agency for International Development (USAID/Kenya), ICF International, United Nations Fund for Population Activities (UNFPA), the United Kingdom Department for International Development (DfID), World Bank, Danish International Development Agency (DANIDA), United Nations Children's Fund (UNICEF), German Development Bank (KfW), World Food Programme (WFP), Clinton Health Access Initiative (CHAI), Micronutrient Initiative (MI), US Centers for Disease Control and Prevention (CDC), Japan International Cooperation Agency (JICA), Joint United Nations Programme on HIV/AIDS (UNAIDS), and the World Health Organization (WHO). The management of such a large undertaking was made possible by a memorandum of understanding (MoU) signed by all the partners and by the creation of active Steering and Technical Committees.
County, Urban, Rural and National
Households
Sample survey data [ssd]
The sample for the 2014 KDHS was drawn from a master sampling frame, the Fifth National Sample Survey and Evaluation Programme (NASSEP V). This is a frame that the KNBS currently operates to conduct household-based surveys throughout Kenya. Development of the frame began in 2012, and it contains a total of 5,360 clusters split into four equal subsamples. These clusters were drawn with a stratified probability proportional to size sampling methodology from 96,251 enumeration areas (EAs) in the 2009 Kenya Population and Housing Census. The 2014 KDHS used two subsamples of the NASSEP V frame that were developed in 2013. Approximately half of the clusters in these two subsamples were updated between November 2013 and September 2014.

Kenya is divided into 47 counties that serve as devolved units of administration, created under the new constitution of 2010. During the development of the NASSEP V, each of the 47 counties was stratified into urban and rural strata; since Nairobi county and Mombasa county have only urban areas, the resulting total was 92 sampling strata. The 2014 KDHS was designed to produce representative estimates for most of the survey indicators at the national level, for urban and rural areas separately, at the regional (former provincial) level, and for selected indicators at the county level. In order to meet these objectives, the sample was designed to have 40,300 households from 1,612 clusters spread across the country, with 995 clusters in rural areas and 617 in urban areas.

Samples were selected independently in each sampling stratum, using a two-stage sample design. In the first stage, the 1,612 EAs were selected with equal probability from the NASSEP V frame. The households from listing operations served as the sampling frame for the second stage of selection, in which 25 households were selected from each cluster. The interviewers visited only the preselected households, and no replacement of the preselected households was allowed during data collection. The Household Questionnaire and the Woman's Questionnaire were administered in all households, while the Man's Questionnaire was administered in every second household. Because of the non-proportional allocation to the sampling strata and the fixed sample size per cluster, the survey was not self-weighting. The resulting data have, therefore, been weighted to be representative at the national, regional, and county levels.
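As a rough illustration of why the weighting described above is needed, here is a minimal sketch of how a household design weight can be computed for a two-stage design of this kind: the inverse of the product of the cluster's selection probability within its stratum and the household's selection probability within the cluster's listing. The function name and all numbers are hypothetical; the official KDHS weights also incorporate the NASSEP V frame's own selection probabilities and non-response adjustments.

```python
# Minimal sketch of a two-stage household design weight. Figures are
# hypothetical and do not reproduce the published KDHS weights.

def household_design_weight(clusters_selected_in_stratum,
                            clusters_in_stratum_frame,
                            households_listed_in_cluster,
                            households_selected_in_cluster=25):
    # Stage 1: cluster (EA) selected with equal probability within its stratum.
    p1 = clusters_selected_in_stratum / clusters_in_stratum_frame
    # Stage 2: fixed take of households from the cluster's listing.
    p2 = households_selected_in_cluster / households_listed_in_cluster
    # The design weight is the inverse of the overall selection probability.
    return 1.0 / (p1 * p2)

# Example: a rural stratum with 120 frame clusters, 18 of them sampled,
# and a cluster whose listing operation found 210 households.
w = household_design_weight(18, 120, 210)
print(f"Design weight: {w:.1f}")
```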
Not available
Face-to-face [f2f]
The 2014 KDHS used a household questionnaire, a questionnaire for women age 15-49, and a questionnaire for men age 15-54. These instruments were based on the model questionnaires developed for The DHS Program, the questionnaires used in previous KDHS surveys, and the current information needs of Kenya. During the development of the questionnaires, input was sought from a variety of organisations that are expected to use the resulting data. A two-day workshop involving key stakeholders was held to discuss the questionnaire design.

Producing county-level estimates requires collecting data from a large number of households within each county, resulting in a considerable increase in the sample size, from 9,936 households in the 2008-09 KDHS to 40,300 households in 2014. A survey of this magnitude raises concerns about data quality and overall management. To address these concerns, reduce the length of fieldwork, and limit interviewer and respondent fatigue, a decision was made not to implement the full questionnaire in every household and, in so doing, to collect only priority indicators at the county level. Stakeholders generated a list of these priority indicators. Short household and woman's questionnaires were then designed based on the full questionnaires; the short questionnaires contain the subset of questions from the full questionnaires required to measure the priority indicators at the county level.

Thus, a total of five questionnaires were used in the 2014 KDHS: (1) a full Household Questionnaire, (2) a short Household Questionnaire, (3) a full Woman's Questionnaire, (4) a short Woman's Questionnaire, and (5) a Man's Questionnaire. The 2014 KDHS sample was divided into halves. In one half, households were administered the full Household Questionnaire, the full Woman's Questionnaire, and the Man's Questionnaire. In the other half, households were administered the short Household Questionnaire and the short Woman's Questionnaire. Selection of these subsamples was done at the household level: within a cluster, one in every two households was assigned to the full questionnaires, with the remaining households receiving the short questionnaires.
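Purely as an illustration (not the documented KDHS procedure), the sketch below shows one way the within-cluster split into full- and short-questionnaire households could be implemented: every second selected household, after a random start, is assigned to the full-questionnaire group. The household labels and the function are hypothetical.

```python
# Illustrative sketch of splitting the 25 selected households in a cluster
# into two questionnaire subsamples by taking every second household
# after a random start. Not the official KDHS assignment procedure.
import random

def split_cluster(households, seed=None):
    """Return (full_questionnaire_households, short_questionnaire_households)."""
    rng = random.Random(seed)
    start = rng.randint(0, 1)  # random start, so either half can receive the full forms
    full = [h for i, h in enumerate(households) if i % 2 == start]
    short = [h for i, h in enumerate(households) if i % 2 != start]
    return full, short

full, short = split_cluster([f"HH-{i:02d}" for i in range(1, 26)], seed=1)
print(len(full), "full-questionnaire households;", len(short), "short-questionnaire households")
```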