100+ datasets found
  1. RCS Data Russia

    • listtodata.com
    • st.listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). RCS Data Russia [Dataset]. https://listtodata.com/rcs-data-russia
    Available download formats
    .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Authors
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    Russia
    Variables measured
    Phone number, email address, full name, address, city, state, gender, age, income, IP address
    Description

    RCS Data Russia is an authentic dataset that you can get now. The database comes with a replacement guarantee: if any numbers are incorrect, they will be replaced, so you only receive valid numbers. You won’t have to worry about outdated numbers, because the system automatically removes invalid data. This helps you connect with real people interested in your offers, making your outreach more effective and reliable and saving you from wasting time on dead numbers or inactive users.

    In addition, RCS Data Russia is the key to staying ahead in marketing. With updated information, you can always rely on accurate contacts, which helps build trust with potential customers. Invalid data can harm your campaign, but this database removes all outdated or incorrect information and keeps the focus on real, active contacts. As a result, your marketing efforts will be more successful.

    Russia RCS data is reliable, and you can easily filter it by gender, age, relationship status, and location, which makes finding the right audience simple. The data is always valid, so you won’t waste time on incorrect numbers, and you can trust the accuracy of this database. Also, 24/7 support is available: if you have questions, someone is ready to help at any time. With valid data, reaching people who match your needs becomes easy and quick, and you save time and money while getting the best results for your business.

    Moreover, Russia RCS data is well suited to marketers and businesses. You can create targeted campaigns with the help of this database, ensuring you reach the right people who might have an interest in your product or service. Filtering by details such as age and location helps make your campaign specific and effective.

  2. Data from: Evaluation of downscaled, gridded climate data for the conterminous United States

    • data.niaid.nih.gov
    • search.dataone.org
    • +1more
    zip
    Updated Feb 13, 2016
    Cite
    Ruben Behnke; Stephen Vavrus; Andrew Allstadt; Thomas Albright; Wayne E. Thogmartin; Volker C. Radeloff (2016). Evaluation of downscaled, gridded climate data for the conterminous United States [Dataset]. http://doi.org/10.5061/dryad.7tv80
    Available download formats
    zip
    Dataset updated
    Feb 13, 2016
    Dataset provided by
    University of Nevada, Reno
    University of Montana
    University of Wisconsin–Madison
    United States Geological Survey
    Authors
    Ruben Behnke; Stephen Vavrus; Andrew Allstadt; Thomas Albright; Wayne E. Thogmartin; Volker C. Radeloff
    License

    CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)

    Area covered
    Contiguous United States, United States, North America
    Description

    Weather and climate affect many ecological processes, making spatially continuous yet fine-resolution weather data desirable for ecological research and predictions. Numerous downscaled weather data sets exist, but little attempt has been made to evaluate them systematically. Here we address this shortcoming by focusing on four major questions: (1) How accurate are downscaled, gridded climate data sets in terms of temperature and precipitation estimates? (2) Are there significant regional differences in accuracy among data sets? (3) How accurate are their mean values compared with extremes? (4) Does their accuracy depend on spatial resolution? We compared eight widely used downscaled data sets that provide gridded daily weather data for recent decades across the United States. We found considerable differences among data sets and between downscaled and weather station data. Temperature is represented more accurately than precipitation, and climate averages are more accurate than weather extremes. The data set exhibiting the best agreement with station data varies among ecoregions. Surprisingly, the accuracy of the data sets does not depend on spatial resolution. Although some inherent differences among data sets and weather station data are to be expected, our findings highlight how much different interpolation methods affect downscaled weather data, even for local comparisons with nearby weather stations located inside a grid cell. More broadly, our results highlight the need for careful consideration among different available data sets in terms of which variables they describe best, where they perform best, and their resolution, when selecting a downscaled weather data set for a given ecological application.
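    As a rough illustration of the station-versus-grid comparison described in this abstract, the sketch below computes per-station bias and RMSE between downscaled grid values and weather-station observations. The file names and column names (station_id, date, t_obs, t_grid) are assumptions for illustration, not part of the published dataset.

    ```python
    # Sketch of a station-vs-grid evaluation; file and column names are assumed.
    import numpy as np
    import pandas as pd

    obs = pd.read_csv("station_daily_temperature.csv")      # station_id, date, t_obs
    grid = pd.read_csv("downscaled_daily_temperature.csv")  # station_id, date, t_grid

    merged = obs.merge(grid, on=["station_id", "date"], how="inner")
    merged["error"] = merged["t_grid"] - merged["t_obs"]

    # Per-station bias and RMSE, the usual summary statistics for this kind of comparison
    summary = merged.groupby("station_id")["error"].agg(
        bias="mean",
        rmse=lambda e: np.sqrt(np.mean(np.square(e))),
    )
    print(summary.describe())
    ```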

  3. Data from: S1 Dataset -

    • plos.figshare.com
    bin
    Updated Aug 11, 2023
    + more versions
    Cite
    Esra Çınar Tanrıverdi; Sinan Yılmaz; Yasemin Çayır (2023). S1 Dataset - [Dataset]. http://doi.org/10.1371/journal.pone.0288769.s001
    Available download formats
    bin
    Dataset updated
    Aug 11, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Esra Çınar Tanrıverdi; Sinan Yılmaz; Yasemin Çayır
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Medical education can be a challenging and stressful process, and additional stressors can make it even more complex and impair a student’s attention and concentration. To the authors’ knowledge, there is no valid and reliable scale to measure medical school stress in Turkish medical students. Therefore, this study aimed to determine the validity and reliability of the Perceived Medical School Stress (PMSS) Scale in Turkish medical students.

    The Perceived Medical School Stress Scale is a self-assessment tool developed to measure medical school-induced stress in medical students. It consists of 13 items divided into two subdimensions. Scale items are answered using a four-point (0–4) Likert system. The total score that can be obtained from the PMSS ranges from 0 to 52, with higher scores indicating higher levels of perceived stress.

    First, the scale was translated into Turkish (with back-translation) and piloted with 52 students. Then, the scale was applied to 612 volunteer medical students to assess validity. Convergent validity and confirmatory factor analysis were used to assess the construct validity of the scale; test-retest analysis, item correlations, and Cronbach’s alpha coefficients were used to evaluate its reliability. Confirmatory factor analysis confirmed the two-factor structure of the original scale, and the fit indices of the resulting model showed excellent fit. The Generalized Anxiety Disorder-7 (GAD-7) Scale was used for convergent validity. The GAD-7 is a self-assessment tool that measures the level of generalized anxiety; it is answered on a four-point Likert scale with reference to the last two weeks, yields a score between 0 and 21, and a score of ten or more indicates a possible anxiety disorder.

    The students’ mean perceived medical school stress score was 39.80±8.09, and their GAD-7 score was 11.0±5.5. A significant positive relationship was found between the total scores of the two scales (r = .48, P < .001). The Cronbach’s alpha value of the scale was .81, and test-retest reliability was significant for all scale items (P < .001 for all). No item was deleted based on Cronbach’s alpha values and item-total correlations. There was no significant relationship between scores on the Turkish version of the PMSS or the GAD-7 and age, sex, income status, tobacco use, or exercise (P > .05).

    The Turkish version of the Perceived Medical School Stress Scale is a valid and reliable scale that can be used to investigate the medical school-specific stress of students.
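    The reliability and convergent-validity statistics named above (Cronbach’s alpha, Pearson’s r) can be illustrated with a short sketch; the item responses and GAD-7 totals below are simulated placeholders, not the study data.

    ```python
    # Illustrative Cronbach's alpha and Pearson correlation on simulated data.
    import numpy as np

    rng = np.random.default_rng(0)
    items = rng.integers(0, 5, size=(100, 13)).astype(float)  # 100 respondents x 13 items (0-4)
    gad7 = rng.integers(0, 22, size=100).astype(float)        # simulated GAD-7 totals (0-21)

    def cronbach_alpha(x: np.ndarray) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = x.shape[1]
        return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

    pmss_total = items.sum(axis=1)
    r = np.corrcoef(pmss_total, gad7)[0, 1]   # Pearson r for convergent validity
    print(f"alpha = {cronbach_alpha(items):.2f}, r = {r:.2f}")
    ```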

  4. Data from: Getting into the game: evaluation of the reliability, validity and utility of the Ignite Challenge scale for school-aged children with autism spectrum disorder

    • tandf.figshare.com
    docx
    Updated Mar 30, 2024
    Cite
    F. Virginia Wright; Annemarie Wright; Catriona Bauve; Kerry Evans (2024). Getting into the game: evaluation of the reliability, validity and utility of the Ignite Challenge scale for school-aged children with autism spectrum disorder [Dataset]. http://doi.org/10.6084/m9.figshare.22681250.v1
    Available download formats
    docx
    Dataset updated
    Mar 30, 2024
    Dataset provided by
    Taylor & Francis (https://taylorandfrancis.com/)
    Authors
    F. Virginia Wright; Annemarie Wright; Catriona Bauve; Kerry Evans
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Evaluate reliability, concurrent validity and utility of the Ignite Challenge motor skills measure for children with autism spectrum disorder (ASD). In this measurement study, children completed the Ignite Challenge twice, 1–3 weeks apart. A physiotherapist assessor (one of seven) conducted a child’s test-retest assessments and scored administration ease and child engagement (/10 visual analogue scale). A second assessor rated baseline assessment videos. Validity data (parent-report PEDI-CAT) were collected at baseline. Reliability analysis employed ICCs (95% CI) and evaluated minimum detectable change (MDC80). Pearson’s correlations (r) estimated validity.

    Forty-seven children with ASD (mean 9.34 years [SD = 2.35]; 10 girls; independent social communication) were tested at baseline; 45 were retested. Ignite Challenge baseline and retest mean scores were 69.0% (SD = 17.1) and 69.5% (SD = 16.6) respectively, with excellent inter-rater and test-retest reliability (ICC = 0.96 [95% CI 0.92, 0.97] and ICC = 0.91 [95% CI 0.84, 0.95] respectively) and MDC80 = 9.28. Administration ease and child engagement were rated 6.5/10 (SD = 2.4) and 6.7/10 (SD = 2.2). Ignite Challenge and PEDI-CAT Social/Mobility (n = 45) associations were r = 0.54 and 0.57. Minimal suggestions for measure revisions arose from child/assessor feedback. The Ignite Challenge can reliably identify movement strengths and challenges of children with ASD, and its use may permit more appropriate evaluation and goal setting within physical activity-based programs.

    • Ignite Challenge is a reliable and valid advanced motor skills measure for children with autism spectrum disorder (ASD), ages 6 years and up.
    • Ignite Challenge can be reliably scored in person (“live”), even with younger children and those requiring increased assessor attention to optimize engagement.
    • Most children enjoyed playing the Ignite Challenge “mini games”; this positive engagement (“getting into the game”) helps support assessment of their best motor performance abilities.
    • Ignite Challenge identifies motor-related challenges that impact a child’s physical activity participation, and thus informs meaningful goal setting and intervention with children with ASD.
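    The reported minimum detectable change can be approximately reproduced with the standard formulation MDC = SEM x z x sqrt(2), where SEM = SD x sqrt(1 - ICC); the assumption below is that the baseline SD (17.1) and the test-retest ICC (0.91) were used with z = 1.28 for the 80% level.

    ```python
    # MDC80 from SD and ICC, assuming the standard SEM-based formulation.
    import math

    sd, icc, z80 = 17.1, 0.91, 1.28
    sem = sd * math.sqrt(1 - icc)       # standard error of measurement
    mdc80 = sem * z80 * math.sqrt(2)    # minimum detectable change at the 80% level
    print(round(mdc80, 2))              # ~9.3, close to the reported MDC80 = 9.28
    ```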

  5. Retail Store Data: Accurate Places Data | Global | Location Data on 75M+ Places

    • datarade.ai
    .csv
    Updated Aug 22, 2024
    + more versions
    Cite
    SafeGraph (2024). Retail Store Data: Accurate Places Data | Global | Location Data on 75M+ Places [Dataset]. https://datarade.ai/data-products/retail-store-data-accurate-places-data-global-location-d-safegraph
    Available download formats
    .csv
    Dataset updated
    Aug 22, 2024
    Dataset authored and provided by
    SafeGraph
    Area covered
    Kiribati, Lebanon, Kuwait, Libya, Saint Lucia, Azerbaijan, Bosnia and Herzegovina, Spain, Pitcairn, Georgia
    Description

    SafeGraph Places provides baseline information for every record in the SafeGraph product suite via the Places schema, and polygon information, when applicable, via the Geometry schema. The current scope of a place is defined as any location humans can visit, with the exception of single-family homes. This definition encompasses a diverse set of places, ranging from restaurants, grocery stores, and malls to parks, hospitals, museums, offices, and industrial parks. Premium sets of Places include apartment buildings, Parking Lots, and Point POIs (such as ATMs or transit stations).

    SafeGraph Places is a point of interest (POI) data offering with varying coverage depending on the country. Note that address conventions and formatting vary across countries. SafeGraph has coalesced these fields into the Places schema.
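    As a sketch of how the Places and Geometry schemas described above might be combined in practice, the snippet below filters POIs by category and attaches polygons. The file and column names (placekey, category, polygon_wkt) are illustrative assumptions and may not match the actual SafeGraph delivery.

    ```python
    # Hypothetical join of a Places extract with its Geometry counterpart.
    import pandas as pd

    places = pd.read_csv("safegraph_places.csv")      # one row per POI (Places schema, assumed layout)
    geometry = pd.read_csv("safegraph_geometry.csv")  # polygon info where applicable (assumed layout)

    grocery = places[places["category"] == "Grocery Stores"]
    grocery_with_polygons = grocery.merge(
        geometry[["placekey", "polygon_wkt"]], on="placekey", how="left"
    )
    print(len(grocery_with_polygons))
    ```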

  6. ERA5 Land air temperature daily average

    • data.europa.eu
    • data.opendatascience.eu
    Updated Jul 15, 2022
    + more versions
    Cite
    (2022). ERA5 Land air temperature daily average [Dataset]. https://data.europa.eu/88u/dataset/45c626be-3b29-43ae-832e-6a9c70c5d8f6
    Dataset updated
    Jul 15, 2022
    Description

    Overview: era5.copernicus: air temperature daily averages from 2000 to 2020 resampled with CHELSA to 1 km resolution

    Traceability (lineage): The data sources used to generate this dataset are ERA5-Land hourly data from 1950 to present (Copernicus Climate Data Store) and CHELSA monthly climatologies.

    Scientific methodology: The methodology used for downscaling follows established procedures as used by e.g. Worldclim and CHELSA.

    Usability: The substantial improvement in spatial resolution, together with the high temporal resolution of one day, further improves the usability of the original ERA5 Land time series product, which is useful for all kinds of land surface applications such as flood or drought forecasting. The temporal and spatial resolution of this dataset, the period covered, and the fixed grid used for data distribution enable decision makers, businesses and individuals to access and use more accurate information on land states.

    Uncertainty quantification: The ERA5-Land dataset, as any other simulation, provides estimates which have some degree of uncertainty. Numerical models can only provide a more or less accurate representation of the real physical processes governing different components of the Earth System. In general, the uncertainty of model estimates grows as we go back in time, because the number of observations available to create a good quality atmospheric forcing is lower. ERA5-land parameter fields can currently be used in combination with the uncertainty of the equivalent ERA5 fields.

    Data validation approaches: Validation of the ERA5 Land dataset against multiple in-situ datasets is presented in the reference paper (Muñoz-Sabater et al., 2021).

    Completeness: The dataset covers the entire Geo-harmonizer region as defined by the landmask raster dataset. However, some small islands might be missing if there are no data in the original ERA5 Land dataset.

    Consistency: ERA5-Land is a reanalysis dataset providing a consistent view of the evolution of land variables over several decades at an enhanced resolution compared to ERA5. ERA5-Land has been produced by replaying the land component of the ECMWF ERA5 climate reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. Reanalysis produces data that goes several decades back in time, providing an accurate description of the climate of the past.

    Positional accuracy: 1 km spatial resolution

    Temporal accuracy: Daily maps for the years 2000-2020.

    Thematic accuracy: The raster values represent minimum, mean, and maximum daily air temperature 2m above ground in degrees Celsius x 10.
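    Because the stored values are degrees Celsius multiplied by 10, a user typically rescales them after reading; the sketch below (with a placeholder file name) shows this with rasterio.

    ```python
    # Read one daily raster and convert the stored values (deg C x 10) back to deg C.
    import rasterio

    with rasterio.open("era5land_t2m_daily_mean.tif") as src:  # placeholder file name
        scaled = src.read(1)            # stored values: degrees Celsius x 10
    temperature_c = scaled / 10.0       # back to degrees Celsius
    print(temperature_c.min(), temperature_c.max())
    ```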

  7. Data from: 2012 USDA Plant Hardiness Zone Map Mean Annual Extreme Low Temperature Rasters

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    • +1more
    Updated Oct 2, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). 2012 USDA Plant Hardiness Zone Map Mean Annual Extreme Low Temperature Rasters [Dataset]. https://catalog.data.gov/dataset/2012-usda-plant-hardiness-zone-map-mean-annual-extreme-low-temperature-rasters
    Dataset updated
    Oct 2, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    These rasters provide the local mean annual extreme low temperature from 1976 to 2005 in an 800 m x 800 m grid covering the USA (including Puerto Rico), based on interpolation of data from more than a thousand weather stations. Each location's Plant Hardiness Zone is calculated by classifying that temperature into 5-degree bands. The classified rasters are then used to create print and interactive maps. A complex algorithm was used for this edition of the USDA Plant Hardiness Zone Map (PHZM) to enable more accurate interpolation between weather reporting stations. This new method takes into account factors such as elevation changes and proximity to bodies of water, which enabled mapping of more accurate zones.

    Temperature station data for this edition of the USDA PHZM came from several different sources. In the eastern and central United States, Puerto Rico, and Hawaii, nearly all the data came from weather stations of the National Weather Service. In the western United States and Alaska, data from stations maintained by the USDA Natural Resources Conservation Service, USDA Forest Service, U.S. Department of the Interior (DOI) Bureau of Reclamation, and DOI Bureau of Land Management also helped to better define hardiness zones in mountainous areas. Environment Canada provided data from Canadian stations, and data from Mexican stations came from the Global Historical Climate Network.

    All of these data were carefully examined to ensure that only the most reliable were used in the mapping. In the end, data from a total of 7,983 stations were incorporated into the maps. The USDA PHZM was produced with the latest version of PRISM, a highly sophisticated climate mapping technology developed at Oregon State University. The map was produced from a digital computer grid, with each cell measuring about half a mile on a side. PRISM estimated the mean annual extreme minimum temperature for each grid cell (or pixel on the map) by examining data from nearby stations; determining how the temperature changed with elevation; and accounting for possible coastal effects, temperature inversions, and the type of topography (ridge top, hill slope, or valley bottom). Information on PRISM can be obtained from the PRISM Climate Group website (http://prism.oregonstate.edu).

    Once a draft of the map was completed, it was reviewed by a team of climatologists, agricultural meteorologists, and horticultural experts. If the zone for an area appeared anomalous to these expert reviewers, experts double-checked for errors or biases. For example, zones along the Canadian border in the Northern Plains initially appeared slightly too warm to several members of the review team who are experts in this region. It was found that there were very few weather reporting stations along the border in the United States in that area. Data from Canadian reporting stations were added, and the zones in that region are now more accurately represented. In another example, a reviewer noted that areas along the relatively mild New Jersey coastline that were distant from observing stations appeared to be too cold. This was remedied by increasing the PRISM algorithm’s sensitivity to coastal proximity, resulting in a mild coastal strip that is more consistently delineated up and down the shoreline. On the other hand, a reviewer familiar with Maryland’s Eastern Shore thought the zones there seemed too warm. The data were double-checked and no biases were found; the zone designations remained unchanged.

    The zones in this edition were calculated based on 1976-2005 temperature data. Each zone represents the average annual extreme minimum temperature for an area, reflecting the temperatures recorded for each of the years 1976-2005. This does not represent the coldest it has ever been or ever will be in an area, but it reflects the average lowest winter temperature for a given geographic area for this time period. This average value became the standard for assigning zones in the 1960s. The previous edition of the USDA Plant Hardiness Zone Map, which was revised and published in 1990, was drawn from weather data for 1974 to 1986.

    A detailed explanation of the mapmaking process and a discussion of the horticultural applications of the new PHZM are available in the articles listed below.
    Daly, C., M.P. Widrlechner, M.D. Halbleib, J.I. Smith, and W.P. Gibson. 2012. Development of a new USDA Plant Hardiness Zone Map for the United States. Journal of Applied Meteorology and Climatology, 51: 242-264.
    Widrlechner, M.P., C. Daly, M. Keller, and K. Kaplan. 2012. Horticultural Applications of a Newly Revised USDA Plant Hardiness Zone Map. HortTechnology, 22: 6-19.
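    The banding step described above (classifying a mean annual extreme minimum temperature into a hardiness zone) can be sketched as follows, assuming the conventional zone boundaries in degrees Fahrenheit: zone 1 starts at -60 F, full zones are 10 F wide, and each is split into 5 F half-zones "a" and "b".

    ```python
    # Classify a mean annual extreme minimum temperature (deg F) into a USDA zone band.
    def hardiness_zone(t_min_f: float) -> str:
        zone = int((t_min_f + 60) // 10) + 1            # full zone number (zone 1 starts at -60 F)
        half = "a" if (t_min_f + 60) % 10 < 5 else "b"  # lower or upper 5-degree half-zone
        return f"{zone}{half}"

    print(hardiness_zone(-12.3))  # -> "5b" (the -15 to -10 F band)
    ```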

  8. August 2024 data-update for "Updated science-wide author databases of standardized citation indicators"

    • elsevier.digitalcommonsdata.com
    Updated Sep 16, 2024
    + more versions
    Cite
    John P.A. Ioannidis (2024). August 2024 data-update for "Updated science-wide author databases of standardized citation indicators" [Dataset]. http://doi.org/10.17632/btchxktzyw.7
    Dataset updated
    Sep 16, 2024
    Authors
    John P.A. Ioannidis
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    Citation metrics are widely used and misused. We have created a publicly available database of top-cited scientists that provides standardized information on citations, h-index, co-authorship-adjusted hm-index, citations to papers in different authorship positions, and a composite indicator (c-score). Separate data are shown for career-long and for single recent year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given, and data on retracted papers (based on the Retraction Watch database), as well as citations to/from retracted papers, have been added in the most recent iteration. Scientists are classified into 22 scientific fields and 174 sub-fields according to the standard Science-Metrix classification. Field- and subfield-specific percentiles are also provided for all scientists with at least 5 papers.

    Career-long data are updated to end-of-2023, and single recent year data pertain to citations received during calendar year 2023. The selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field. This version (7) is based on the August 1, 2024 snapshot from Scopus, updated to end of citation year 2023. This work uses Scopus data; calculations were performed using all Scopus author profiles as of August 1, 2024. If an author is not on the list, it is simply because the composite indicator value was not high enough to appear on the list. It does not mean that the author does not do good work.

    PLEASE ALSO NOTE THAT THE DATABASE HAS BEEN PUBLISHED IN AN ARCHIVAL FORM AND WILL NOT BE CHANGED. The published version reflects Scopus author profiles at the time of calculation. We thus advise authors to ensure that their Scopus profiles are accurate. REQUESTS FOR CORRECTIONS OF THE SCOPUS DATA (INCLUDING CORRECTIONS IN AFFILIATIONS) SHOULD NOT BE SENT TO US. They should be sent directly to Scopus, preferably by use of the Scopus to ORCID feedback wizard (https://orcid.scopusfeedback.com/), so that the correct data can be used in any future annual updates of the citation indicator databases.

    The c-score focuses on impact (citations) rather than productivity (number of publications), and it also incorporates information on co-authorship and author positions (single, first, last author). If you have additional questions, see the attached file of FREQUENTLY ASKED QUESTIONS. Finally, we alert users that all citation metrics have limitations and their use should be tempered and judicious. For more reading, we refer to the Leiden Manifesto: https://www.nature.com/articles/520429a
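    Of the indicators listed above, the h-index has the simplest definition (the largest h such that the author has h papers with at least h citations each); the sketch below shows that standard definition for illustration only and is not the database's exact computation pipeline.

    ```python
    # Standard h-index from a list of per-paper citation counts.
    def h_index(citations: list[int]) -> int:
        ranked = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with at least 3 citations each)
    ```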

  9. Macedonia Number Dataset

    • listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). Macedonia Number Dataset [Dataset]. https://listtodata.com/macedonia-dataset
    Available download formats
    .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Authors
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    Mozambique, Saint Lucia, Guatemala, Estonia, Namibia, Jersey, India, Jamaica, Bahrain, Tajikistan
    Variables measured
    Phone number, email address, full name, address, city, state, gender, age, income, IP address
    Description

    The Macedonia number dataset is a collection of phone numbers from people living in Macedonia. You can filter the data by gender, age, and relationship status, which helps you connect with the right audience. Whether you want to reach young adults or families, you can quickly find the right numbers, making your communication more effective and targeted. List to Data helps you find phone numbers for your business.

    Additionally, the Macedonia number dataset follows GDPR rules, which protect people’s privacy and ensure that all data usage is legal. You can remove invalid data, keeping only active, accurate numbers, and update your list as numbers change. With this database, you have access to information that is not only reliable but also respectful of privacy.

    Macedonia phone data refers to a database of phone numbers that is 100% correct and valid. We carefully check every number in this database to ensure it works, so businesses can call these numbers confidently, knowing they will reach real people. If you find a number that doesn’t work, a replacement guarantee applies: the company will give you a new number for free, so your contact list stays fresh and reliable. Furthermore, all phone numbers in this Macedonia phone data are collected on a customer-permission basis, meaning each person agreed to include their number in the database and knows their information will be used safely and ethically. You can trust this data for marketing or outreach efforts. Overall, phone data from Macedonia provides a strong foundation for any outreach campaign.

    The Macedonia phone number list is a valuable tool that allows you to filter information based on specific needs. It is helpful for businesses and organizations that want to reach out to people in this country. The phone numbers come from trusted sources, and you can check the source URLs to see where the information comes from. Moreover, the Macedonia phone number list follows an opt-in process: everyone on the list agreed to share their phone number, understands how their information will be used, and has permitted that use. This ensures the data is legal and respectful of people’s privacy, so businesses can use the list without worrying about breaking any rules.

  10. World Health Survey 2003 - Georgia

    • microdata.worldbank.org
    • apps.who.int
    • +2more
    Updated Oct 17, 2013
    + more versions
    Cite
    World Health Organization (WHO) (2013). World Health Survey 2003 - Georgia [Dataset]. https://microdata.worldbank.org/index.php/catalog/1714
    Dataset updated
    Oct 17, 2013
    Dataset provided by
    World Health Organization (https://who.int/)
    Authors
    World Health Organization (WHO)
    Time period covered
    2003
    Area covered
    Georgia
    Description

    Abstract

    Different countries have different health outcomes that are in part due to the way respective health systems perform. Regardless of the type of health system, individuals will have health and non-health expectations in terms of how the institution responds to their needs. In many countries, however, health systems do not perform effectively and this is in part due to lack of information on health system performance, and on the different service providers.

    The aim of the WHO World Health Survey is to provide empirical data to the national health information systems so that there is a better monitoring of health of the people, responsiveness of health systems and measurement of health-related parameters.

    The overall aim of the survey is to examine the way populations report their health, understand how people value health states, measure the performance of health systems in relation to responsiveness, and gather information on modes and extents of payment for health encounters through a nationally representative population-based community survey. In addition, it addresses various areas such as health care expenditures, adult mortality, birth history, various risk factors, assessment of main chronic health conditions and the coverage of health interventions, in specific additional modules.

    The objectives of the survey programme are to: 1. develop a means of providing valid, reliable and comparable information, at low cost, to supplement the information provided by routine health information systems. 2. build the evidence base necessary for policy-makers to monitor if health systems are achieving the desired goals, and to assess if additional investment in health is achieving the desired outcomes. 3. provide policy-makers with the evidence they need to adjust their policies, strategies and programmes as necessary.

    Geographic coverage

    The survey sampling frame must cover 100% of the country's eligible population, meaning that the entire national territory must be included. This does not mean that every province or territory need be represented in the survey sample but, rather, that all must have a chance (known probability) of being included in the survey sample.

    There may be exceptional circumstances that preclude 100% national coverage. Certain areas in certain countries may be impossible to include due to reasons such as accessibility or conflict. All such exceptions must be discussed with WHO sampling experts. If any region must be excluded, it must constitute a coherent area, such as a particular province or region. For example if ¾ of region D in country X is not accessible due to war, the entire region D will be excluded from analysis.

    Analysis unit

    Households and individuals

    Universe

    The WHS will include all male and female adults (18 years of age and older) who are not out of the country during the survey period. It should be noted that this includes the population who may be institutionalized for health reasons at the time of the survey: all persons who would have fit the definition of household member at the time of their institutionalisation are included in the eligible population.

    If the randomly selected individual is institutionalized short-term (e.g. a 3-day stay at a hospital), the interviewer must return to the household to interview him/her once the individual has come back. If the randomly selected individual is institutionalized long-term (e.g. has been in a nursing home for the last 8 years), the interviewer must travel to that institution to interview him/her.

    The target population includes any adult, male or female age 18 or over living in private households. Populations in group quarters, on military reservations, or in other non-household living arrangements will not be eligible for the study. People who are in an institution due to a health condition (such as a hospital, hospice, nursing home, home for the aged, etc.) at the time of the visit to the household are interviewed either in the institution or upon their return to their household if this is within a period of two weeks from the first visit to the household.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    SAMPLING GUIDELINES FOR WHS

    Surveys in the WHS program must employ a probability sampling design. This means that every single individual in the sampling frame has a known and non-zero chance of being selected into the survey sample. While a Single Stage Random Sample is ideal if feasible, it is recognized that most sites will carry out Multi-stage Cluster Sampling.

    The WHS sampling frame should cover 100% of the eligible population in the surveyed country. This means that every eligible person in the country has a chance of being included in the survey sample. It also means that particular ethnic groups or geographical areas may not be excluded from the sampling frame.

    The sample size of the WHS in each country is 5000 persons (exceptions considered on a by-country basis). An adequate number of persons must be drawn from the sampling frame to account for an estimated amount of non-response (refusal to participate, empty houses etc.). The highest estimate of potential non-response and empty households should be used to ensure that the desired sample size is reached at the end of the survey period. This is very important because if, at the end of data collection, the required sample size of 5000 has not been reached additional persons must be selected randomly into the survey sample from the sampling frame. This is both costly and technically complicated (if this situation is to occur, consult WHO sampling experts for assistance), and best avoided by proper planning before data collection begins.
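    The over-sampling arithmetic implied by this guidance can be sketched as follows; the 20% non-response and 10% empty-household rates are illustrative assumptions, not WHS figures.

    ```python
    # How many persons to draw so ~5000 completed interviews remain after attrition (assumed rates).
    import math

    target_completes = 5000
    expected_nonresponse = 0.20   # assumed refusals / not-at-home
    expected_empty = 0.10         # assumed empty or ineligible households

    draws = math.ceil(target_completes / ((1 - expected_nonresponse) * (1 - expected_empty)))
    print(draws)  # ~6945 selections needed under these assumptions
    ```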

    All steps of sampling, including justification for stratification, cluster sizes, probabilities of selection, weights at each stage of selection, and the computer program used for randomization must be communicated to WHO.

    STRATIFICATION

    Stratification is the process by which the population is divided into subgroups. Sampling will then be conducted separately in each subgroup. Strata or subgroups are chosen because evidence is available that they are related to the outcome (e.g. health, responsiveness, mortality, coverage etc.). The strata chosen will vary by country and reflect local conditions. Some examples of factors that can be stratified on are geography (e.g. North, Central, South), level of urbanization (e.g. urban, rural), socio-economic zones, provinces (especially if health administration is primarily under the jurisdiction of provincial authorities), or presence of health facility in area. Strata to be used must be identified by each country and the reasons for selection explicitly justified.

    Stratification is strongly recommended at the first stage of sampling. Once the strata have been chosen and justified, all stages of selection will be conducted separately in each stratum. We recommend stratifying on 3-5 factors. It is optimum to have half as many strata (note the difference between stratifying variables, which may be such variables as gender, socio-economic status, province/region etc. and strata, which are the combination of variable categories, for example Male, High socio-economic status, Xingtao Province would be a stratum).

    Strata should be as homogenous as possible within and as heterogeneous as possible between. This means that strata should be formulated in such a way that individuals belonging to a stratum should be as similar to each other with respect to key variables as possible and as different as possible from individuals belonging to a different stratum. This maximises the efficiency of stratification in reducing sampling variance.

    MULTI-STAGE CLUSTER SELECTION

    A cluster is a naturally occurring unit or grouping within the population (e.g. enumeration areas, cities, universities, provinces, hospitals etc.); it is a unit for which the administrative level has clear, nonoverlapping boundaries. Cluster sampling is useful because it avoids having to compile exhaustive lists of every single person in the population. Clusters should be as heterogeneous as possible within and as homogenous as possible between (note that this is the opposite criterion as that for strata). Clusters should be as small as possible (i.e. large administrative units such as Provinces or States are not good clusters) but not so small as to be homogenous.

    In cluster sampling, a number of clusters are randomly selected from a list of clusters. Then, either all members of the chosen cluster or a random selection from among them are included in the sample. Multistage sampling is an extension of cluster sampling where a hierarchy of clusters are chosen going from larger to smaller.

    In order to carry out multi-stage sampling, one needs to know only the population sizes of the sampling units. For the smallest sampling unit above the elementary unit however, a complete list of all elementary units (households) is needed; in order to be able to randomly select among all households in the TSU, a list of all those households is required. This information may be available from the most recent population census. If the last census was >3 years ago or the information furnished by it was of poor quality or unreliable, the survey staff will have the task of enumerating all households in the smallest randomly selected sampling unit. It is very important to budget for this step if it is necessary and ensure that all households are properly enumerated in order that a representative sample is obtained.
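    A minimal sketch of the two-stage selection outlined here, with first-stage clusters drawn with probability proportional to size and a simple random sample of households within each selected cluster, is shown below; all cluster counts and sizes are made-up illustrative values.

    ```python
    # Two-stage cluster sampling sketch: PPS selection of clusters, then SRS of households.
    import numpy as np

    rng = np.random.default_rng(42)
    cluster_sizes = rng.integers(200, 2000, size=500)   # households per cluster (illustrative)
    probs = cluster_sizes / cluster_sizes.sum()

    n_clusters, households_per_cluster = 25, 20
    selected = rng.choice(len(cluster_sizes), size=n_clusters, replace=False, p=probs)

    sample = {
        int(c): rng.choice(cluster_sizes[c], size=households_per_cluster, replace=False)
        for c in selected                                # household indices within each cluster
    }
    print(sum(len(v) for v in sample.values()))          # 25 x 20 = 500 sampled households
    ```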

    It is always best to have as many clusters in the PSU as possible. The reason for this is that the fewer the number of respondents in each PSU, the lower will be the clustering effect which

  11. World Health Survey 2003 - Belgium

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +2more
    Updated Oct 17, 2013
    + more versions
    Cite
    World Health Organization (WHO) (2013). World Health Survey 2003 - Belgium [Dataset]. https://microdata.worldbank.org/index.php/catalog/1694
    Dataset updated
    Oct 17, 2013
    Dataset provided by
    World Health Organization (https://who.int/)
    Authors
    World Health Organization (WHO)
    Time period covered
    2003
    Area covered
    Belgium
    Description

    Abstract

    Different countries have different health outcomes that are in part due to the way respective health systems perform. Regardless of the type of health system, individuals will have health and non-health expectations in terms of how the institution responds to their needs. In many countries, however, health systems do not perform effectively and this is in part due to lack of information on health system performance, and on the different service providers.

    The aim of the WHO World Health Survey is to provide empirical data to the national health information systems so that there is a better monitoring of health of the people, responsiveness of health systems and measurement of health-related parameters.

    The overall aim of the survey is to examine the way populations report their health, understand how people value health states, measure the performance of health systems in relation to responsiveness, and gather information on modes and extents of payment for health encounters through a nationally representative population-based community survey. In addition, it addresses various areas such as health care expenditures, adult mortality, birth history, various risk factors, assessment of main chronic health conditions and the coverage of health interventions, in specific additional modules.

    The objectives of the survey programme are to: 1. develop a means of providing valid, reliable and comparable information, at low cost, to supplement the information provided by routine health information systems. 2. build the evidence base necessary for policy-makers to monitor if health systems are achieving the desired goals, and to assess if additional investment in health is achieving the desired outcomes. 3. provide policy-makers with the evidence they need to adjust their policies, strategies and programmes as necessary.

    Geographic coverage

    The survey sampling frame must cover 100% of the country's eligible population, meaning that the entire national territory must be included. This does not mean that every province or territory need be represented in the survey sample but, rather, that all must have a chance (known probability) of being included in the survey sample.

    There may be exceptional circumstances that preclude 100% national coverage. Certain areas in certain countries may be impossible to include due to reasons such as accessibility or conflict. All such exceptions must be discussed with WHO sampling experts. If any region must be excluded, it must constitute a coherent area, such as a particular province or region. For example if ¾ of region D in country X is not accessible due to war, the entire region D will be excluded from analysis.

    Analysis unit

    Households and individuals

    Universe

    The WHS will include all male and female adults (18 years of age and older) who are not out of the country during the survey period. It should be noted that this includes the population who may be institutionalized for health reasons at the time of the survey: all persons who would have fit the definition of household member at the time of their institutionalisation are included in the eligible population.

    If the randomly selected individual is institutionalized short-term (e.g. a 3-day stay at a hospital), the interviewer must return to the household to interview him/her once the individual has come back. If the randomly selected individual is institutionalized long-term (e.g. has been in a nursing home for the last 8 years), the interviewer must travel to that institution to interview him/her.

    The target population includes any adult, male or female age 18 or over living in private households. Populations in group quarters, on military reservations, or in other non-household living arrangements will not be eligible for the study. People who are in an institution due to a health condition (such as a hospital, hospice, nursing home, home for the aged, etc.) at the time of the visit to the household are interviewed either in the institution or upon their return to their household if this is within a period of two weeks from the first visit to the household.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    SAMPLING GUIDELINES FOR WHS

    Surveys in the WHS program must employ a probability sampling design. This means that every single individual in the sampling frame has a known and non-zero chance of being selected into the survey sample. While a Single Stage Random Sample is ideal if feasible, it is recognized that most sites will carry out Multi-stage Cluster Sampling.

    The WHS sampling frame should cover 100% of the eligible population in the surveyed country. This means that every eligible person in the country has a chance of being included in the survey sample. It also means that particular ethnic groups or geographical areas may not be excluded from the sampling frame.

    The sample size of the WHS in each country is 5000 persons (exceptions considered on a by-country basis). An adequate number of persons must be drawn from the sampling frame to account for an estimated amount of non-response (refusal to participate, empty houses etc.). The highest estimate of potential non-response and empty households should be used to ensure that the desired sample size is reached at the end of the survey period. This is very important because if, at the end of data collection, the required sample size of 5000 has not been reached additional persons must be selected randomly into the survey sample from the sampling frame. This is both costly and technically complicated (if this situation is to occur, consult WHO sampling experts for assistance), and best avoided by proper planning before data collection begins.

    All steps of sampling, including justification for stratification, cluster sizes, probabilities of selection, weights at each stage of selection, and the computer program used for randomization must be communicated to WHO.

    STRATIFICATION

    Stratification is the process by which the population is divided into subgroups. Sampling will then be conducted separately in each subgroup. Strata or subgroups are chosen because evidence is available that they are related to the outcome (e.g. health, responsiveness, mortality, coverage etc.). The strata chosen will vary by country and reflect local conditions. Some examples of factors that can be stratified on are geography (e.g. North, Central, South), level of urbanization (e.g. urban, rural), socio-economic zones, provinces (especially if health administration is primarily under the jurisdiction of provincial authorities), or presence of health facility in area. Strata to be used must be identified by each country and the reasons for selection explicitly justified.

    Stratification is strongly recommended at the first stage of sampling. Once the strata have been chosen and justified, all stages of selection will be conducted separately in each stratum. We recommend stratifying on 3-5 factors. It is optimum to have half as many strata (note the difference between stratifying variables, which may be such variables as gender, socio-economic status, province/region etc. and strata, which are the combination of variable categories, for example Male, High socio-economic status, Xingtao Province would be a stratum).

    Strata should be as homogenous as possible within and as heterogeneous as possible between. This means that strata should be formulated in such a way that individuals belonging to a stratum should be as similar to each other with respect to key variables as possible and as different as possible from individuals belonging to a different stratum. This maximises the efficiency of stratification in reducing sampling variance.

    MULTI-STAGE CLUSTER SELECTION

    A cluster is a naturally occurring unit or grouping within the population (e.g. enumeration areas, cities, universities, provinces, hospitals etc.); it is a unit for which the administrative level has clear, nonoverlapping boundaries. Cluster sampling is useful because it avoids having to compile exhaustive lists of every single person in the population. Clusters should be as heterogeneous as possible within and as homogenous as possible between (note that this is the opposite criterion as that for strata). Clusters should be as small as possible (i.e. large administrative units such as Provinces or States are not good clusters) but not so small as to be homogenous.

    In cluster sampling, a number of clusters are randomly selected from a list of clusters. Then, either all members of the chosen cluster or a random selection from among them are included in the sample. Multistage sampling is an extension of cluster sampling where a hierarchy of clusters are chosen going from larger to smaller.

    In order to carry out multi-stage sampling, one needs to know only the population sizes of the sampling units. For the smallest sampling unit above the elementary unit however, a complete list of all elementary units (households) is needed; in order to be able to randomly select among all households in the TSU, a list of all those households is required. This information may be available from the most recent population census. If the last census was >3 years ago or the information furnished by it was of poor quality or unreliable, the survey staff will have the task of enumerating all households in the smallest randomly selected sampling unit. It is very important to budget for this step if it is necessary and ensure that all households are properly enumerated in order that a representative sample is obtained.

    It is always best to have as many clusters in the PSU as possible. The reason for this is that the fewer the number of respondents in each PSU, the lower will be the clustering effect which

  12. World Health Survey 2003 - Guatemala

    • microdata.worldbank.org
    • apps.who.int
    • +2more
    Updated Oct 17, 2013
    + more versions
    Cite
    World Health Organization (WHO) (2013). World Health Survey 2003 - Guatemala [Dataset]. https://microdata.worldbank.org/index.php/catalog/1717
    Dataset updated
    Oct 17, 2013
    Dataset provided by
    World Health Organization (https://who.int/)
    Authors
    World Health Organization (WHO)
    Time period covered
    2003
    Area covered
    Guatemala
    Description

    Abstract

    Different countries have different health outcomes that are in part due to the way respective health systems perform. Regardless of the type of health system, individuals will have health and non-health expectations in terms of how the institution responds to their needs. In many countries, however, health systems do not perform effectively and this is in part due to lack of information on health system performance, and on the different service providers.

    The aim of the WHO World Health Survey is to provide empirical data to the national health information systems so that there is a better monitoring of health of the people, responsiveness of health systems and measurement of health-related parameters.

    The overall aim of the survey is to examine the way populations report their health, understand how people value health states, measure the performance of health systems in relation to responsiveness, and gather information on modes and extents of payment for health encounters through a nationally representative population-based community survey. In addition, it addresses various areas such as health care expenditures, adult mortality, birth history, various risk factors, assessment of main chronic health conditions and the coverage of health interventions, in specific additional modules.

    The objectives of the survey programme are to: 1. develop a means of providing valid, reliable and comparable information, at low cost, to supplement the information provided by routine health information systems. 2. build the evidence base necessary for policy-makers to monitor if health systems are achieving the desired goals, and to assess if additional investment in health is achieving the desired outcomes. 3. provide policy-makers with the evidence they need to adjust their policies, strategies and programmes as necessary.

    Geographic coverage

    The survey sampling frame must cover 100% of the country's eligible population, meaning that the entire national territory must be included. This does not mean that every province or territory need be represented in the survey sample but, rather, that all must have a chance (known probability) of being included in the survey sample.

    There may be exceptional circumstances that preclude 100% national coverage. Certain areas in certain countries may be impossible to include due to reasons such as accessibility or conflict. All such exceptions must be discussed with WHO sampling experts. If any region must be excluded, it must constitute a coherent area, such as a particular province or region. For example if ¾ of region D in country X is not accessible due to war, the entire region D will be excluded from analysis.

    Analysis unit

    Households and individuals

    Universe

    The WHS will include all male and female adults (18 years of age and older) who are not out of the country during the survey period. It should be noted that this includes the population who may be institutionalized for health reasons at the time of the survey: all persons who would have fit the definition of household member at the time of their institutionalisation are included in the eligible population.

    If the randomly selected individual is institutionalized short-term (e.g. a 3-day stay at a hospital), the interviewer must return to the household to interview him/her once the individual has come back. If the randomly selected individual is institutionalized long-term (e.g. has been in a nursing home for the last 8 years), the interviewer must travel to that institution to interview him/her.

    The target population includes any adult, male or female age 18 or over living in private households. Populations in group quarters, on military reservations, or in other non-household living arrangements will not be eligible for the study. People who are in an institution due to a health condition (such as a hospital, hospice, nursing home, home for the aged, etc.) at the time of the visit to the household are interviewed either in the institution or upon their return to their household if this is within a period of two weeks from the first visit to the household.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    SAMPLING GUIDELINES FOR WHS

    Surveys in the WHS program must employ a probability sampling design. This means that every single individual in the sampling frame has a known and non-zero chance of being selected into the survey sample. While a Single Stage Random Sample is ideal if feasible, it is recognized that most sites will carry out Multi-stage Cluster Sampling.

    The WHS sampling frame should cover 100% of the eligible population in the surveyed country. This means that every eligible person in the country has a chance of being included in the survey sample. It also means that particular ethnic groups or geographical areas may not be excluded from the sampling frame.

    The sample size of the WHS in each country is 5000 persons (exceptions are considered on a by-country basis). An adequate number of persons must be drawn from the sampling frame to account for an estimated amount of non-response (refusals to participate, empty houses, etc.). The highest estimate of potential non-response and empty households should be used to ensure that the desired sample size is reached at the end of the survey period. This is very important because if, at the end of data collection, the required sample size of 5000 has not been reached, additional persons must be selected randomly into the survey sample from the sampling frame. This is both costly and technically complicated (if this situation occurs, consult WHO sampling experts for assistance), and it is best avoided by proper planning before data collection begins.

    All steps of sampling, including justification for stratification, cluster sizes, probabilities of selection, weights at each stage of selection, and the computer program used for randomization, must be communicated to WHO.

    STRATIFICATION

    Stratification is the process by which the population is divided into subgroups. Sampling will then be conducted separately in each subgroup. Strata or subgroups are chosen because evidence is available that they are related to the outcome (e.g. health, responsiveness, mortality, coverage etc.). The strata chosen will vary by country and reflect local conditions. Some examples of factors that can be stratified on are geography (e.g. North, Central, South), level of urbanization (e.g. urban, rural), socio-economic zones, provinces (especially if health administration is primarily under the jurisdiction of provincial authorities), or presence of health facility in area. Strata to be used must be identified by each country and the reasons for selection explicitly justified.

    Stratification is strongly recommended at the first stage of sampling. Once the strata have been chosen and justified, all stages of selection will be conducted separately in each stratum. We recommend stratifying on 3-5 factors; it is optimum to have half as many strata. (Note the difference between stratifying variables, which may be variables such as gender, socio-economic status, or province/region, and strata, which are the combinations of variable categories; for example, Male, High socio-economic status, Xingtao Province would be one stratum.)

    Strata should be as homogeneous as possible within and as heterogeneous as possible between. This means that strata should be formulated in such a way that individuals belonging to a stratum are as similar to each other as possible with respect to key variables, and as different as possible from individuals belonging to a different stratum. This maximises the efficiency of stratification in reducing sampling variance.
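
    The stratified design and the 5000-person target interact with the non-response allowance described above. As a minimal sketch in Python, the allocation below inflates the draw for expected non-response and splits it proportionally across strata; the strata labels, population sizes, and 20% non-response rate are invented for illustration and are not part of the WHS design.

        # Illustrative sketch only: proportional allocation of a WHS-style sample
        # across strata, inflating the draw for expected non-response. The strata
        # labels, population sizes, and 20% non-response rate are hypothetical.
        import math

        def allocate_sample(strata_sizes, target_n=5000, expected_nonresponse=0.20):
            """Return the number of persons to draw per stratum."""
            total = sum(strata_sizes.values())
            # Inflate the draw so roughly target_n completed interviews remain
            # after the highest plausible non-response / empty-household rate.
            draw_n = math.ceil(target_n / (1.0 - expected_nonresponse))
            return {stratum: round(draw_n * size / total)   # proportional share
                    for stratum, size in strata_sizes.items()}

        strata = {"North-urban": 2_100_000, "North-rural": 900_000,
                  "South-urban": 3_400_000, "South-rural": 1_600_000}
        print(allocate_sample(strata))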

    MULTI-STAGE CLUSTER SELECTION

    A cluster is a naturally occurring unit or grouping within the population (e.g. enumeration areas, cities, universities, provinces, hospitals, etc.); it is a unit for which the administrative level has clear, non-overlapping boundaries. Cluster sampling is useful because it avoids having to compile exhaustive lists of every single person in the population. Clusters should be as heterogeneous as possible within and as homogeneous as possible between (note that this is the opposite of the criterion for strata). Clusters should be as small as possible (i.e. large administrative units such as provinces or states are not good clusters) but not so small as to be homogeneous.

    In cluster sampling, a number of clusters are randomly selected from a list of clusters. Then, either all members of the chosen cluster or a random selection from among them are included in the sample. Multi-stage sampling is an extension of cluster sampling in which a hierarchy of clusters is chosen, going from larger to smaller.

    In order to carry out multi-stage sampling, one needs to know only the population sizes of the sampling units. For the smallest sampling unit above the elementary unit, however, a complete list of all elementary units (households) is needed; in order to be able to randomly select among all households in the TSU, a list of all those households is required. This information may be available from the most recent population census. If the last census was more than 3 years ago, or the information it furnished is of poor quality or unreliable, the survey staff will have the task of enumerating all households in the smallest randomly selected sampling unit. It is very important to budget for this step if it is necessary, and to ensure that all households are properly enumerated so that a representative sample is obtained.
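
    To make the multi-stage logic concrete, the following hedged Python sketch selects primary sampling units by systematic probability-proportional-to-size (PPS) sampling, takes a fixed number of households per selected unit, and computes design weights as inverse inclusion probabilities. The cluster sizes (in households), the number of PSUs, and the take per PSU are illustrative assumptions, not WHS prescriptions.

        # Minimal sketch, not the official WHS procedure: two-stage selection with
        # PSUs drawn by systematic probability-proportional-to-size (PPS) sampling
        # and a fixed number of households taken per selected PSU.
        import random

        def pps_systematic(cluster_sizes, n_clusters, rng):
            """Select n_clusters cluster indices with probability proportional to size.
            Assumes no single cluster is larger than the sampling interval."""
            total = sum(cluster_sizes)
            step = total / n_clusters
            start = rng.uniform(0, step)
            points = [start + k * step for k in range(n_clusters)]
            chosen, cum = [], 0.0
            it = iter(enumerate(cluster_sizes))
            idx, size = next(it)
            for p in points:
                while cum + size < p:          # walk along the cumulative size line
                    cum += size
                    idx, size = next(it)
                chosen.append(idx)
            return chosen

        def two_stage_sample(cluster_sizes, n_clusters=25, hh_per_cluster=20, seed=42):
            total = sum(cluster_sizes)
            rng = random.Random(seed)
            plan = []
            for i in pps_systematic(cluster_sizes, n_clusters, rng):
                p1 = n_clusters * cluster_sizes[i] / total   # first-stage PPS probability
                p2 = hh_per_cluster / cluster_sizes[i]       # second-stage within-PSU probability
                plan.append({"psu": i, "households": hh_per_cluster,
                             "design_weight": 1.0 / (p1 * p2)})
            return plan

        ea_rng = random.Random(7)
        sizes = [ea_rng.randint(150, 900) for _ in range(400)]  # hypothetical enumeration-area sizes
        print(two_stage_sample(sizes)[:3])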

    It is always best to have as many clusters in the PSU as possible. The reason for this is that the fewer the number of respondents in each PSU, the lower will be the clustering effect which

  13. World Health Survey 2003 - Spain

    • microdata.worldbank.org
    • apps.who.int
    • +2more
    Updated Oct 17, 2013
    Cite
    World Health Organization (WHO) (2013). World Health Survey 2003 - Spain [Dataset]. https://microdata.worldbank.org/index.php/catalog/1708
    Explore at:
    Dataset updated
    Oct 17, 2013
    Dataset provided by
    World Health Organization (https://who.int/)
    Authors
    World Health Organization (WHO)
    Time period covered
    2003
    Area covered
    Spain
    Description

    Abstract

    Different countries have different health outcomes that are in part due to the way respective health systems perform. Regardless of the type of health system, individuals will have health and non-health expectations in terms of how the institution responds to their needs. In many countries, however, health systems do not perform effectively and this is in part due to lack of information on health system performance, and on the different service providers.

    The aim of the WHO World Health Survey is to provide empirical data to national health information systems so that the health of populations, the responsiveness of health systems, and related health parameters can be better monitored and measured.

    The overall aim of the survey is to examine the way populations report their health, understand how people value health states, measure the performance of health systems in relation to responsiveness, and gather information on modes and extents of payment for health encounters, through a nationally representative population-based community survey. In addition, specific additional modules address areas such as health care expenditures, adult mortality, birth history, various risk factors, assessment of main chronic health conditions and the coverage of health interventions.

    The objectives of the survey programme are to: 1. develop a means of providing valid, reliable and comparable information, at low cost, to supplement the information provided by routine health information systems. 2. build the evidence base necessary for policy-makers to monitor if health systems are achieving the desired goals, and to assess if additional investment in health is achieving the desired outcomes. 3. provide policy-makers with the evidence they need to adjust their policies, strategies and programmes as necessary.

    Geographic coverage

    The survey sampling frame must cover 100% of the country's eligible population, meaning that the entire national territory must be included. This does not mean that every province or territory need be represented in the survey sample but, rather, that all must have a chance (known probability) of being included in the survey sample.

    There may be exceptional circumstances that preclude 100% national coverage. Certain areas in certain countries may be impossible to include due to reasons such as accessibility or conflict. All such exceptions must be discussed with WHO sampling experts. If any region must be excluded, it must constitute a coherent area, such as a particular province or region. For example, if ¾ of region D in country X is not accessible due to war, the entire region D will be excluded from analysis.

    Analysis unit

    Households and individuals

    Universe

    The WHS will include all male and female adults (18 years of age and older) who are not out of the country during the survey period. It should be noted that this includes the population who may be institutionalized for health reasons at the time of the survey: all persons who would have fit the definition of household member at the time of their institutionalisation are included in the eligible population.

    If the randomly selected individual is institutionalized short-term (e.g. a 3-day hospital stay), the interviewer must return to the household once the individual is back and interview him/her there. If the randomly selected individual is institutionalized long-term (e.g. has been in a nursing home for the last 8 years), the interviewer must travel to that institution to interview him/her.

    The target population includes any adult, male or female age 18 or over living in private households. Populations in group quarters, on military reservations, or in other non-household living arrangements will not be eligible for the study. People who are in an institution due to a health condition (such as a hospital, hospice, nursing home, home for the aged, etc.) at the time of the visit to the household are interviewed either in the institution or upon their return to their household if this is within a period of two weeks from the first visit to the household.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    SAMPLING GUIDELINES FOR WHS

    Surveys in the WHS program must employ a probability sampling design. This means that every single individual in the sampling frame has a known and non-zero chance of being selected into the survey sample. While a Single Stage Random Sample is ideal if feasible, it is recognized that most sites will carry out Multi-stage Cluster Sampling.

    The WHS sampling frame should cover 100% of the eligible population in the surveyed country. This means that every eligible person in the country has a chance of being included in the survey sample. It also means that particular ethnic groups or geographical areas may not be excluded from the sampling frame.

    The sample size of the WHS in each country is 5000 persons (exceptions are considered on a by-country basis). An adequate number of persons must be drawn from the sampling frame to account for an estimated amount of non-response (refusals to participate, empty houses, etc.). The highest estimate of potential non-response and empty households should be used to ensure that the desired sample size is reached at the end of the survey period. This is very important because if, at the end of data collection, the required sample size of 5000 has not been reached, additional persons must be selected randomly into the survey sample from the sampling frame. This is both costly and technically complicated (if this situation occurs, consult WHO sampling experts for assistance), and it is best avoided by proper planning before data collection begins.

    All steps of sampling, including justification for stratification, cluster sizes, probabilities of selection, weights at each stage of selection, and the computer program used for randomization, must be communicated to WHO.

    STRATIFICATION

    Stratification is the process by which the population is divided into subgroups. Sampling will then be conducted separately in each subgroup. Strata or subgroups are chosen because evidence is available that they are related to the outcome (e.g. health, responsiveness, mortality, coverage etc.). The strata chosen will vary by country and reflect local conditions. Some examples of factors that can be stratified on are geography (e.g. North, Central, South), level of urbanization (e.g. urban, rural), socio-economic zones, provinces (especially if health administration is primarily under the jurisdiction of provincial authorities), or presence of health facility in area. Strata to be used must be identified by each country and the reasons for selection explicitly justified.

    Stratification is strongly recommended at the first stage of sampling. Once the strata have been chosen and justified, all stages of selection will be conducted separately in each stratum. We recommend stratifying on 3-5 factors; it is optimum to have half as many strata. (Note the difference between stratifying variables, which may be variables such as gender, socio-economic status, or province/region, and strata, which are the combinations of variable categories; for example, Male, High socio-economic status, Xingtao Province would be one stratum.)

    Strata should be as homogeneous as possible within and as heterogeneous as possible between. This means that strata should be formulated in such a way that individuals belonging to a stratum are as similar to each other as possible with respect to key variables, and as different as possible from individuals belonging to a different stratum. This maximises the efficiency of stratification in reducing sampling variance.

    MULTI-STAGE CLUSTER SELECTION

    A cluster is a naturally occurring unit or grouping within the population (e.g. enumeration areas, cities, universities, provinces, hospitals, etc.); it is a unit for which the administrative level has clear, non-overlapping boundaries. Cluster sampling is useful because it avoids having to compile exhaustive lists of every single person in the population. Clusters should be as heterogeneous as possible within and as homogeneous as possible between (note that this is the opposite of the criterion for strata). Clusters should be as small as possible (i.e. large administrative units such as provinces or states are not good clusters) but not so small as to be homogeneous.

    In cluster sampling, a number of clusters are randomly selected from a list of clusters. Then, either all members of the chosen cluster or a random selection from among them are included in the sample. Multi-stage sampling is an extension of cluster sampling in which a hierarchy of clusters is chosen, going from larger to smaller.

    In order to carry out multi-stage sampling, one needs to know only the population sizes of the sampling units. For the smallest sampling unit above the elementary unit, however, a complete list of all elementary units (households) is needed; in order to be able to randomly select among all households in the TSU, a list of all those households is required. This information may be available from the most recent population census. If the last census was more than 3 years ago, or the information it furnished is of poor quality or unreliable, the survey staff will have the task of enumerating all households in the smallest randomly selected sampling unit. It is very important to budget for this step if it is necessary, and to ensure that all households are properly enumerated so that a representative sample is obtained.

    It is always best to have as many clusters in the PSU as possible. The reason for this is that the fewer the number of respondents in each PSU, the lower will be the clustering effect which

  14. Astrographic Catalog of Reference Stars

    • data.nasa.gov
    • s.cnmilf.com
    • +1more
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). Astrographic Catalog of Reference Stars [Dataset]. https://data.nasa.gov/dataset/astrographic-catalog-of-reference-stars
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    For a number of years there has been a great demand for a high-density catalog of accurate stellar positions and proper motions that maintains a consistent system of reference over the entire sky. The Smithsonian Astrophysical Observatory Star Catalog (SAO; SAO Staff 1966) has partially met those requirements, but its positions brought to current epochs now contain errors on the order of 1 second of arc, plus the proper motions in the SAO differ systematically with one another depending on their source catalogs. With the completion of the Second Cape Photographic Catalogue (CPC2; de Vegt et al. 1989), a photographic survey comparable in density to the AGK3 (Dieckvoss 1975) was finally available for the southern hemisphere.

    These two catalogs were used as a base and matched against the AGK2 (Schorr & Kohlschuetter 1951-58), Yale photographic zones (Yale Trans., Vols. 11-32), First Cape Photographic Catalogue (CPC1; Jackson & Stoy 1954, 55, 58; Stoy 1966), Sydney Southern Star Catalogue (King & Lomb 1983), Sydney Zone Catalogue -48 to -54 degrees (Eichhorn et al. 1983), 124 meridian circle catalogs, and catalogs of recent epochs, such as the Carlsberg Meridian Catalogue, La Palma (CAMC), USNO Zodiacal Zone Catalog (Douglass & Harrington 1990), and the Perth 83 Catalogue (Harwood [1990]) to obtain as many input positions as possible. All positions were then reduced to the system of the FK4 (Fricke & Kopff 1963) using a combination of the FK4, the FK4 Supplement as improved by H. Schwan of the Astronomisches Rechen-Institut in Heidelberg, and the International Reference Stars (IRS; Corbin 1991), then combined with the CPC2 and AGK3. The total number of input positions from which the ACRS was formed is 1,643,783.

    The original catalog is divided into two parts. Part 1 contains the stars having better observational histories and, therefore, more reliable positions and proper motions. This part constitutes 78 percent of the catalog; the mean errors of the proper motions are +/-0.47 arcsec per century and +/-0.46 arcsec per century in right ascension and declination, respectively. The stars in Part 2 have poor observational histories and consist mostly of objects for which only two catalog positions in one or both coordinates were available for computing the proper motions. Where accuracy is the primary consideration, only the stars in Part 1 should be used, while if the highest possible density is desired, the two parts should be combined.

    The ACRS was compiled at the U. S. Naval Observatory with the intention that it be used for new reductions of the Astrographic Catalogue (AC) plates. These plates are small in area (2 x 2 deg) and the IRS is not dense enough. Whereas the ACRS was compiled using the same techniques developed to produce the IRS, it became clear as the work progressed that the ACRS would have applications far beyond its original purpose. With accurate positions and proper motions rigorously reduced to both the FK4 and FK5 (Fricke et al. 1988) systems, it does more than simply replace the SAO. Rather, it provides the uniform system of reference stars that has been needed for many years by those who require densities greater than the IRS and with high accuracy over a wide range of epochs. It is intended that, as additional observations become available, stars will be migrated from Part 2 to Part 1, with the hope that eventually the ACRS will be complete in one part.
    Additional details concerning the compilation and properties of the ACRS can be found in Corbin & Urban (1989), except that the star counts and errors given here supersede the ones given in 1989. The HEASARC revised this database table in August 2005 in order to add Galactic coordinates. This is a service provided by NASA HEASARC.

  15. Data from: Percent amplitude of fluctuation: A simple measure for...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Jan 8, 2020
    Cite
    Jia, Xi-Ze; Liao, Wei; Wang, Ze; Wang, Jue; Ji, Gong-Jun; Zang, Yu-Feng; Zhang, Han; Sun, Jia-Wei; Lv, Ya-Ting; Liu, Dong-Qiang (2020). Percent amplitude of fluctuation: A simple measure for resting-state fMRI signal at single voxel level [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000547868
    Explore at:
    Dataset updated
    Jan 8, 2020
    Authors
    Jia, Xi-Ze; Liao, Wei; Wang, Ze; Wang, Jue; Ji, Gong-Jun; Zang, Yu-Feng; Zhang, Han; Sun, Jia-Wei; Lv, Ya-Ting; Liu, Dong-Qiang
    Description

    The amplitude of low-frequency fluctuation (ALFF) measures the resting-state functional magnetic resonance imaging (RS-fMRI) signal of each voxel. However, the unit of the blood oxygenation level-dependent (BOLD) signal is arbitrary, and hence ALFF is sensitive to the scale of the raw signal. A well-accepted standardization procedure is to divide each voxel's ALFF by the global mean ALFF, yielding mALFF. Although fractional ALFF (fALFF), the ratio of ALFF to the total amplitude within the full frequency band, offers a possible solution to the standardization problem, it mixes in the fluctuation power of the full frequency band and thus cannot reveal the true amplitude characteristics of a given frequency band. The current study borrowed the idea of percent signal change from task fMRI studies and proposed the percent amplitude of fluctuation (PerAF) for RS-fMRI. We first applied PerAF and mPerAF (i.e., PerAF divided by the global mean PerAF) to eyes open (EO) vs. eyes closed (EC) RS-fMRI data. PerAF and mPerAF yielded prominent differences between EO and EC, consistent with previous studies. We then performed a test-retest reliability analysis and found that (PerAF ≈ mPerAF ≈ mALFF) > (fALFF ≈ mfALFF). Head motion regression (Friston-24) increased the reliability of PerAF but decreased that of all other metrics (e.g. mPerAF, mALFF, fALFF, and mfALFF). These results suggest that mPerAF is a valid, more reliable, more straightforward, and hence promising metric for voxel-level RS-fMRI studies. Future studies could use both the PerAF and mPerAF metrics. To promote future application of PerAF, we implemented it in a new version of the REST package named RESTplus.
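
    As a rough illustration of the metric described above, the following Python sketch computes PerAF for a single voxel time series as the mean absolute deviation from the temporal mean, expressed as a percentage of that mean, and mPerAF as PerAF divided by the within-mask mean PerAF. This follows the verbal definition given here rather than the authors' RESTplus implementation, so treat it as an approximation.

        # Hedged sketch of the PerAF idea: follows the verbal definition above,
        # not the authors' RESTplus code, so treat it as an approximation.
        import numpy as np

        def peraf(timeseries):
            """Mean absolute deviation from the temporal mean, as a percentage of that mean."""
            x = np.asarray(timeseries, dtype=float)
            mu = x.mean()
            return 100.0 * np.mean(np.abs(x - mu) / mu)

        def mperaf(voxel_timeseries, mask):
            """Voxel-wise PerAF divided by the mean PerAF within a (e.g. brain) mask."""
            per = np.apply_along_axis(peraf, 1, voxel_timeseries)
            return per / per[mask].mean()

        rng = np.random.default_rng(0)
        ts = 100 + rng.normal(0, [[1.0], [2.0]], size=(2, 200))   # 2 toy voxels, 200 timepoints
        print(peraf(ts[0]), mperaf(ts, np.array([True, True])))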

  16. Fugro Cruise C16185 Line 1040, 75 kHz VMADCP

    • gcoos5.geos.tamu.edu
    • data.ioos.us
    • +2more
    Updated Sep 21, 2017
    Cite
    Rosemary Smith (2017). Fugro Cruise C16185 Line 1040, 75 kHz VMADCP [Dataset]. https://gcoos5.geos.tamu.edu/erddap/info/C16185_075_Line1040_0/index.html
    Explore at:
    Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Sep 21, 2017
    Dataset provided by
    Gulf of Mexico Coastal Ocean Observing System (GCOOS)
    Authors
    Rosemary Smith
    Time period covered
    May 21, 2006
    Area covered
    Variables measured
    u, v, pg, amp, time, depth, pflag, uship, vship, heading, and 5 more
    Description

    Program of vessel mount ADCP measurements comprising a combination of 300kHz and 75kHz ADCP data collected in the vicinity of the Loop Current and drilling blocks between 2004 and 2007.

    _NCProperties=version=2,netcdf=4.7.4,hdf5=1.12.0
    acknowledgement=Data collection funded by various oil industry operators
    cdm_data_type=TrajectoryProfile
    cdm_profile_variables=time
    cdm_trajectory_variables=trajectory
    CODAS_processing_note=

    CODAS processing note:

    Overview

    The CODAS database is a specialized storage format designed for shipboard ADCP data. "CODAS processing" uses this format to hold averaged shipboard ADCP velocities and other variables, during the stages of data processing. The CODAS database stores velocity profiles relative to the ship as east and north components along with position, ship speed, heading, and other variables. The netCDF short form contains ocean velocities relative to earth, time, position, transducer temperature, and ship heading; these are designed to be "ready for immediate use". The netCDF long form is just a dump of the entire CODAS database. Some variables are no longer used, and all have names derived from their original CODAS names, dating back to the late 1980's.

    Post-processing

    CODAS post-processing, i.e. that which occurs after the single-ping profiles have been vector-averaged and loaded into the CODAS database, includes editing (using automated algorithms and manual tools), rotation and scaling of the measured velocities, and application of a time-varying heading correction. Additional algorithms developed more recently include translation of the GPS positions to the transducer location, and averaging of ship's speed over the times of valid pings when Percent Good is reduced. Such post-processing is needed prior to submission of "processed ADCP data" to JASADCP or other archives.

    Full CODAS processing

    Whenever single-ping data have been recorded, full CODAS processing provides the best end product.

    Full CODAS processing starts with the single-ping velocities in beam coordinates. Based on the transducer orientation relative to the hull, the beam velocities are transformed to horizontal, vertical, and "error velocity" components. Using a reliable heading (typically from the ship's gyro compass), the velocities in ship coordinates are rotated into earth coordinates.

    Pings are grouped into an "ensemble" (usually 2-5 minutes duration) and undergo a suite of automated editing algorithms (removal of acoustic interference; identification of the bottom; editing based on thresholds; and specialized editing that targets CTD wire interference and "weak, biased profiles"). The ensemble of single-ping velocities is then averaged using an iterative reference layer averaging scheme. Each ensemble is approximated as a single function of depth, with a zero-average over a reference layer plus a reference layer velocity for each ping. Adding the average of the single-ping reference layer velocities to the function of depth yields the ensemble-average velocity profile. These averaged profiles, along with ancillary measurements, are written to disk, and subsequently loaded into the CODAS database. Everything after this stage is "post-processing".

    note (time):

    Time is stored in the database using UTC Year, Month, Day, Hour, Minute, Seconds. Floating point time "Decimal Day" is the floating point interval in days since the start of the year, usually the year of the first day of the cruise.

    note (heading):

    CODAS processing uses heading from a reliable device, and (if available) applies a time-dependent correction from an accurate heading device. The reliable heading device is typically a gyro compass (for example, the Bridge gyro). Accurate heading devices can be POSMV, Seapath, Phins, Hydrins, MAHRS, or various Ashtech devices; this varies with the technology of the time. It is always confusing to keep track of the sign of the heading correction. Headings are written in degrees, positive clockwise. Setting up some variables:

    X = transducer angle (CONFIG1_heading_bias), positive clockwise (beam 3 angle relative to ship)
    G = Reliable heading (gyrocompass)
    A = Accurate heading
    dh = G - A = time-dependent heading correction (ANCIL2_watrk_hd_misalign)

    Rotation of the measured velocities into the correct coordinate system amounts to (u+i*v)*(exp(i*theta)) where theta is the sum of the corrected heading and the transducer angle.

    theta = X + (G - dh) = X + G - dh

    Watertrack and Bottomtrack calibrations give an indication of the residual angle offset to apply; for example, if the mean and median of the phase are both 0.5, then R = 0.5. Using the "rotate" command, the value of R is added to "ANCIL2_watrk_hd_misalign".

    new_dh = dh + R

    Therefore the total angle used in rotation is

    new_theta = X + G - dh_new = X + G - (dh + R) = (X - R) + (G - dh)

    The new estimate of the transducer angle is: X - R
    ANCIL2_watrk_hd_misalign contains: dh + R
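
    The rotation described above can be written out directly. The Python sketch below applies theta = X + (G - dh) via complex multiplication, exactly as stated in the note; the numeric angles and velocities in the example are invented for illustration and are not values from this cruise.

        # Sketch of the rotation above: measured (u, v) is rotated by
        # theta = X + (G - dh) using complex multiplication, as stated in the note.
        # The example angles and velocities are invented for illustration.
        import numpy as np

        def rotate_velocity(u, v, X_deg, G_deg, dh_deg):
            """Rotate ship/transducer-frame velocity into earth coordinates."""
            theta = np.deg2rad(X_deg + G_deg - dh_deg)    # theta = X + (G - dh)
            rotated = (u + 1j * v) * np.exp(1j * theta)
            return rotated.real, rotated.imag

        print(rotate_velocity(0.30, 0.05, X_deg=45.2, G_deg=210.0, dh_deg=-0.8))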

    ====================================================

    Profile flags

    Profile editing flags are provided for each depth cell:

    binary value | decimal value | below bottom | Percent Good | bin
    -------------+---------------+--------------+--------------+-----
    000          | 0             |              |              |
    001          | 1             |              |              | bad
    010          | 2             |              | bad          |
    011          | 3             |              | bad          | bad
    100          | 4             | bad          |              |
    101          | 5             | bad          |              | bad
    110          | 6             | bad          | bad          |
    111          | 7             | bad          | bad          | bad
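
    Reading the table, the three flag bits decode as bit 0 = bad bin, bit 1 = bad Percent Good, and bit 2 = below bottom. A minimal Python sketch, assuming that bit assignment:

        # Minimal sketch, assuming the bit assignment shown in the table above.
        def decode_pflag(pflag):
            """Return the editing reasons encoded in a CODAS profile flag (0-7)."""
            return {
                "bad_bin": bool(pflag & 0b001),
                "bad_percent_good": bool(pflag & 0b010),
                "below_bottom": bool(pflag & 0b100),
                "use_velocity": pflag == 0,   # only unflagged cells are kept for u, v
            }

        for flag in (0, 3, 7):
            print(flag, decode_pflag(flag))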

    CODAS_variables= Variables in this CODAS short-form Netcdf file are intended for most end-user scientific analysis and display purposes. For additional information see the CODAS_processing_note global attribute and the attributes of each of the variables.

    time          Time at the end of the ensemble, days from start of year.
    lon, lat      Longitude, Latitude from GPS at the end of the ensemble.
    u, v          Ocean zonal and meridional velocity component profiles.
    uship, vship  Zonal and meridional velocity components of the ship.
    heading       Mean ship heading during the ensemble.
    depth         Bin centers in nominal meters (no sound speed profile correction).
    tr_temp       ADCP transducer temperature.
    pg            Percent Good pings for u, v averaging after editing.
    pflag         Profile Flags based on editing, used to mask u, v.
    amp           Received signal strength in ADCP-specific units; no correction for spreading or attenuation.

    contributor_name=RPS contributor_role=editor contributor_role_vocabulary=https://vocab.nerc.ac.uk/collection/G04/current/ Conventions=CF-1.6, ACDD-1.3, IOOS Metadata Profile Version 1.2, COARDS cruise_id=Fugro_wh75 description=Shipboard ADCP velocity profiles from Fugro_wh75 using instrument wh75 Easternmost_Easting=-89.72401111111111 featureType=TrajectoryProfile geospatial_bounds=LINESTRING (-90.02310555555556 27.051475, -89.72401111111111 27.237783333333333) geospatial_bounds_crs=EPSG:4326 geospatial_bounds_vertical_crs=EPSG:5703 geospatial_lat_max=27.237783333333333 geospatial_lat_min=27.051475 geospatial_lat_units=degrees_north geospatial_lon_max=-89.72401111111111 geospatial_lon_min=-90.02310555555556 geospatial_lon_units=degrees_east geospatial_vertical_max=651.63 geospatial_vertical_min=27.63 geospatial_vertical_positive=down geospatial_vertical_units=m hg_changeset=2924:48293b7d29a9 history=Created: 2019-07-15 17:47:26 UTC id=C16185_075_Line1040_0 infoUrl=ADD ME institution=GCOOS instrument=In Situ/Laboratory Instruments > Profilers/Sounders > Acoustic Sounders > ADCP > Acoustic Doppler Current Profiler keywords_vocabulary=GCMD Science Keywords naming_authority=edu.tamucc.gulfhub Northernmost_Northing=27.237783333333333 platform=ship platform_vocabulary=https://mmisw.org/ont/ioos/platform processing_level=QA'ed and checked by Oceanographer program=Oil and Gas Loop Current VMADCP Program project=O&G LC VMADCP Program software=pycurrents sonar=wh75 source=Current profiler sourceUrl=(local files) Southernmost_Northing=27.051475 standard_name_vocabulary=CF Standard Name Table v67 subsetVariables=time, longitude, latitude, depth, u, v time_coverage_duration=P0Y0M0DT4H14M9S time_coverage_end=2006-05-21T12:09:36Z time_coverage_resolution=P0Y0M0DT0H5M0S time_coverage_start=2006-05-21T07:55:27Z Westernmost_Easting=-90.02310555555556 yearbase=2006

  17. Fugro Cruise C16185 Line 1054, 75 kHz VMADCP

    • gcoos5.geos.tamu.edu
    • data.ioos.us
    • +2more
    Updated Sep 21, 2017
    Cite
    Rosemary Smith (2017). Fugro Cruise C16185 Line 1054, 75 kHz VMADCP [Dataset]. https://gcoos5.geos.tamu.edu/erddap/info/C16185_075_Line1054_0/index.html
    Explore at:
    Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Sep 21, 2017
    Dataset provided by
    Gulf of Mexico Coastal Ocean Observing System (GCOOS)
    Authors
    Rosemary Smith
    Time period covered
    May 25, 2006
    Area covered
    Variables measured
    u, v, pg, amp, time, depth, pflag, uship, vship, heading, and 5 more
    Description

    Program of vessel mount ADCP measurements comprising a combination of 300kHz and 75kHz ADCP data collected in the vicinity of the Loop Current and drilling blocks between 2004 and 2007.

    _NCProperties=version=2,netcdf=4.7.4,hdf5=1.12.0
    acknowledgement=Data collection funded by various oil industry operators
    cdm_data_type=TrajectoryProfile
    cdm_profile_variables=time
    cdm_trajectory_variables=trajectory
    CODAS_processing_note=

    CODAS processing note:

    Overview

    The CODAS database is a specialized storage format designed for shipboard ADCP data. "CODAS processing" uses this format to hold averaged shipboard ADCP velocities and other variables, during the stages of data processing. The CODAS database stores velocity profiles relative to the ship as east and north components along with position, ship speed, heading, and other variables. The netCDF short form contains ocean velocities relative to earth, time, position, transducer temperature, and ship heading; these are designed to be "ready for immediate use". The netCDF long form is just a dump of the entire CODAS database. Some variables are no longer used, and all have names derived from their original CODAS names, dating back to the late 1980's.

    Post-processing

    CODAS post-processing, i.e. that which occurs after the single-ping profiles have been vector-averaged and loaded into the CODAS database, includes editing (using automated algorithms and manual tools), rotation and scaling of the measured velocities, and application of a time-varying heading correction. Additional algorithms developed more recently include translation of the GPS positions to the transducer location, and averaging of ship's speed over the times of valid pings when Percent Good is reduced. Such post-processing is needed prior to submission of "processed ADCP data" to JASADCP or other archives.

    Full CODAS processing

    Whenever single-ping data have been recorded, full CODAS processing provides the best end product.

    Full CODAS processing starts with the single-ping velocities in beam coordinates. Based on the transducer orientation relative to the hull, the beam velocities are transformed to horizontal, vertical, and "error velocity" components. Using a reliable heading (typically from the ship's gyro compass), the velocities in ship coordinates are rotated into earth coordinates.

    Pings are grouped into an "ensemble" (usually 2-5 minutes duration) and undergo a suite of automated editing algorithms (removal of acoustic interference; identification of the bottom; editing based on thresholds; and specialized editing that targets CTD wire interference and "weak, biased profiles"). The ensemble of single-ping velocities is then averaged using an iterative reference layer averaging scheme. Each ensemble is approximated as a single function of depth, with a zero-average over a reference layer plus a reference layer velocity for each ping. Adding the average of the single-ping reference layer velocities to the function of depth yields the ensemble-average velocity profile. These averaged profiles, along with ancillary measurements, are written to disk, and subsequently loaded into the CODAS database. Everything after this stage is "post-processing".

    note (time):

    Time is stored in the database using UTC Year, Month, Day, Hour, Minute, Seconds. Floating point time "Decimal Day" is the floating point interval in days since the start of the year, usually the year of the first day of the cruise.

    note (heading):

    CODAS processing uses heading from a reliable device, and (if available) applies a time-dependent correction from an accurate heading device. The reliable heading device is typically a gyro compass (for example, the Bridge gyro). Accurate heading devices can be POSMV, Seapath, Phins, Hydrins, MAHRS, or various Ashtech devices; this varies with the technology of the time. It is always confusing to keep track of the sign of the heading correction. Headings are written in degrees, positive clockwise. Setting up some variables:

    X = transducer angle (CONFIG1_heading_bias), positive clockwise (beam 3 angle relative to ship)
    G = Reliable heading (gyrocompass)
    A = Accurate heading
    dh = G - A = time-dependent heading correction (ANCIL2_watrk_hd_misalign)

    Rotation of the measured velocities into the correct coordinate system amounts to (u+i*v)*(exp(i*theta)) where theta is the sum of the corrected heading and the transducer angle.

    theta = X + (G - dh) = X + G - dh

    Watertrack and Bottomtrack calibrations give an indication of the residual angle offset to apply; for example, if the mean and median of the phase are both 0.5, then R = 0.5. Using the "rotate" command, the value of R is added to "ANCIL2_watrk_hd_misalign".

    new_dh = dh + R

    Therefore the total angle used in rotation is

    new_theta = X + G - dh_new = X + G - (dh + R) = (X - R) + (G - dh)

    The new estimate of the transducer angle is: X - R
    ANCIL2_watrk_hd_misalign contains: dh + R

    ====================================================

    Profile flags

    Profile editing flags are provided for each depth cell:

    binary value | decimal value | below bottom | Percent Good | bin
    -------------+---------------+--------------+--------------+-----
    000          | 0             |              |              |
    001          | 1             |              |              | bad
    010          | 2             |              | bad          |
    011          | 3             |              | bad          | bad
    100          | 4             | bad          |              |
    101          | 5             | bad          |              | bad
    110          | 6             | bad          | bad          |
    111          | 7             | bad          | bad          | bad

    CODAS_variables= Variables in this CODAS short-form Netcdf file are intended for most end-user scientific analysis and display purposes. For additional information see the CODAS_processing_note global attribute and the attributes of each of the variables.

    time          Time at the end of the ensemble, days from start of year.
    lon, lat      Longitude, Latitude from GPS at the end of the ensemble.
    u, v          Ocean zonal and meridional velocity component profiles.
    uship, vship  Zonal and meridional velocity components of the ship.
    heading       Mean ship heading during the ensemble.
    depth         Bin centers in nominal meters (no sound speed profile correction).
    tr_temp       ADCP transducer temperature.
    pg            Percent Good pings for u, v averaging after editing.
    pflag         Profile Flags based on editing, used to mask u, v.
    amp           Received signal strength in ADCP-specific units; no correction for spreading or attenuation.

    contributor_name=RPS contributor_role=editor contributor_role_vocabulary=https://vocab.nerc.ac.uk/collection/G04/current/ Conventions=CF-1.6, ACDD-1.3, IOOS Metadata Profile Version 1.2, COARDS cruise_id=Fugro_wh75 description=Shipboard ADCP velocity profiles from Fugro_wh75 using instrument wh75 Easternmost_Easting=-89.82341944444443 featureType=TrajectoryProfile geospatial_bounds=LINESTRING (-89.94420555555558 27.253375, -89.82341944444443 27.25493888888889) geospatial_bounds_crs=EPSG:4326 geospatial_bounds_vertical_crs=EPSG:5703 geospatial_lat_max=27.25493888888889 geospatial_lat_min=27.253375 geospatial_lat_units=degrees_north geospatial_lon_max=-89.82341944444443 geospatial_lon_min=-89.94420555555558 geospatial_lon_units=degrees_east geospatial_vertical_max=651.83 geospatial_vertical_min=27.83 geospatial_vertical_positive=down geospatial_vertical_units=m hg_changeset=2924:48293b7d29a9 history=Created: 2019-07-15 17:47:36 UTC id=C16185_075_Line1054_0 infoUrl=ADD ME institution=GCOOS instrument=In Situ/Laboratory Instruments > Profilers/Sounders > Acoustic Sounders > ADCP > Acoustic Doppler Current Profiler keywords_vocabulary=GCMD Science Keywords naming_authority=edu.tamucc.gulfhub Northernmost_Northing=27.25493888888889 platform=ship platform_vocabulary=https://mmisw.org/ont/ioos/platform processing_level=QA'ed and checked by Oceanographer program=Oil and Gas Loop Current VMADCP Program project=O&G LC VMADCP Program software=pycurrents sonar=wh75 source=Current profiler sourceUrl=(local files) Southernmost_Northing=27.253375 standard_name_vocabulary=CF Standard Name Table v67 subsetVariables=time, longitude, latitude, depth, u, v time_coverage_duration=P0Y0M0DT1H16M8S time_coverage_end=2006-05-25T12:05:45Z time_coverage_resolution=P0Y0M0DT0H4M59S time_coverage_start=2006-05-25T10:49:37Z Westernmost_Easting=-89.94420555555558 yearbase=2006

  18. Data_Sheet_1_Reliability and validity analysis of personality assessment...

    • frontiersin.figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated Jun 9, 2023
    Cite
    Yeye Wen; Baobin Li; Deyuan Chen; Tingshao Zhu (2023). Data_Sheet_1_Reliability and validity analysis of personality assessment model based on gait video.pdf [Dataset]. http://doi.org/10.3389/fnbeh.2022.901568.s001
    Explore at:
    pdf
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Yeye Wen; Baobin Li; Deyuan Chen; Tingshao Zhu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Personality affects an individual's academic achievements, occupational tendencies, marriage quality, and physical health, so more convenient and objective personality assessment methods are needed. Gait is a natural, stable, and easy-to-observe body movement that is closely related to personality. The purpose of this paper is to propose a personality assessment model based on gait video and to evaluate the reliability and validity of the multidimensional model. This study recruited 152 participants and used cameras to record their gait videos. Each participant completed the 44-item Big Five Inventory (BFI-44). We constructed diverse static and dynamic time-frequency features based on gait skeleton coordinates, interframe differences, distances between joints, angles between joints, and wavelet decomposition coefficient arrays. We established multidimensional personality trait assessment models through machine learning algorithms and evaluated the criterion validity, split-half reliability, convergent validity, and discriminant validity of these models. The results showed that the Gaussian process regression (GPR) and linear regression (LR) models had the best reliability and validity. The mean values of their criterion validity were 0.478 and 0.508, respectively, and the mean values of their split-half reliability were all greater than 0.8. In the resulting multitrait-multimethod matrix, these methods also had higher convergent and discriminant validity. The proposed approach shows that gait video can be effectively used to evaluate personality traits, providing a new idea for convenient and non-invasive personality assessment methods.
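
    For readers unfamiliar with the reliability measure reported here, the following generic Python sketch shows a split-half reliability computation with the Spearman-Brown correction; the simulated half-scores are stand-ins for model outputs and are not the authors' data or pipeline.

        # Generic sketch of a split-half reliability check with the Spearman-Brown
        # correction; the simulated half-scores are stand-ins, not the authors' data.
        import numpy as np

        def split_half_reliability(half1, half2):
            r = np.corrcoef(half1, half2)[0, 1]    # Pearson r between the two halves
            return 2 * r / (1 + r)                 # Spearman-Brown corrected reliability

        rng = np.random.default_rng(1)
        trait = rng.normal(size=152)                         # 152 participants, as in the study
        half1 = trait + rng.normal(scale=0.4, size=152)      # noisy half-scores from a model
        half2 = trait + rng.normal(scale=0.4, size=152)
        print(round(split_half_reliability(half1, half2), 3))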

  19. Complete Rxivist dataset of scraped biology preprint data

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    • +1more
    Updated Mar 2, 2023
    Cite
    Abdill, Richard J.; Blekhman, Ran (2023). Complete Rxivist dataset of scraped biology preprint data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_2529922
    Explore at:
    Dataset updated
    Mar 2, 2023
    Dataset provided by
    University of Minnesota
    Authors
    Abdill, Richard J.; Blekhman, Ran
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    rxivist.org allowed readers to sort and filter the tens of thousands of preprints posted to bioRxiv and medRxiv. Rxivist used a custom web crawler to index all papers posted to those two websites; this is a snapshot of the Rxivist production database. The version number indicates the date on which the snapshot was taken. See the included "README.md" file for instructions on how to use the "rxivist.backup" file to import the data into a PostgreSQL database server.
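
    As a hypothetical usage sketch (not taken from the dataset's README), the Python snippet below queries the "articles" table after the "rxivist.backup" file has been restored into a local PostgreSQL server; the connection settings are placeholders, and the table may need schema qualification depending on how the snapshot sets the search path.

        # Hypothetical usage sketch: query the "articles" table after restoring
        # rxivist.backup into a local PostgreSQL server. Connection settings are
        # placeholders; adjust the schema/search_path to match the snapshot.
        import psycopg2

        conn = psycopg2.connect(dbname="rxivist", user="postgres",
                                password="postgres", host="localhost")
        with conn, conn.cursor() as cur:
            # The "repo" column is described in the 2020-12-07 version notes below.
            cur.execute("SELECT repo, COUNT(*) FROM articles GROUP BY repo;")
            for repo, n in cur.fetchall():
                print(repo, n)
        conn.close()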

    Please note this is a different repository than the one used for the Rxivist manuscript—that is in a separate Zenodo repository. You're welcome (and encouraged!) to use this data in your research, but please cite our paper, now published in eLife.

    Previous versions are also available pre-loaded into Docker images, available at blekhmanlab/rxivist_data.

    Version notes:

    2023-03-01

    The final Rxivist data upload, more than four years after the first and encompassing 223,541 preprints posted to bioRxiv and medRxiv through the end of February 2023.

    2020-12-07

    In addition to bioRxiv preprints, the database now includes all medRxiv preprints as well.

    The website where a preprint was posted is now recorded in a new field in the "articles" table, called "repo".

    We've significantly refactored the web crawler to take advantage of developments with the bioRxiv API.

    The main difference is that preprints flagged as "published" by bioRxiv are no longer recorded on the same schedule that download metrics are updated: The Rxivist database should now record published DOI entries the same day bioRxiv detects them.

    Twitter metrics have returned, for the most part. Improvements with the Crossref Event Data API mean we can once again tally daily Twitter counts for all bioRxiv DOIs.

    The "crossref_daily" table remains where these are recorded, and daily numbers are now up to date.

    Historical daily counts have also been re-crawled to fill in the empty space that started in October 2019.

    There are still several gaps that are more than a week long due to missing data from Crossref.

    We have recorded available Crossref Twitter data for all papers with DOI numbers starting with "10.1101," which includes all medRxiv preprints. However, there appears to be almost no Twitter data available for medRxiv preprints.

    The download metrics for article id 72514 (DOI 10.1101/2020.01.30.927871) were found to be out of date for February 2020 and are now correct. This is notable because article 72514 is the most downloaded preprint of all time; we're still looking into why this wasn't updated after the month ended.

    2020-11-18

    Publication checks should be back on schedule.

    2020-10-26

    This snapshot fixes most of the data issues found in the previous version. Indexed papers are now up to date, and download metrics are back on schedule. The check for publication status remains behind schedule, however, and the database may not include published DOIs for papers that have been flagged on bioRxiv as "published" over the last two months. Another snapshot will be posted in the next few weeks with updated publication information.

    2020-09-15

    A crawler error caused this snapshot to exclude all papers posted after about August 29, with some papers having download metrics that were more out of date than usual. The "last_crawled" field is accurate.

    2020-09-08

    This snapshot is misconfigured and will not work without modification; it has been replaced with version 2020-09-15.

    2019-12-27

    Several dozen papers did not have dates associated with them; that has been fixed.

    Some authors have had two entries in the "authors" table for portions of 2019, one profile that was linked to their ORCID and one that was not, occasionally with almost identical "name" strings. This happened after bioRxiv began changing author names to reflect the names in the PDFs, rather than the ones manually entered into their system. These database records are mostly consolidated now, but some may remain.

    2019-11-29

    The Crossref Event Data API remains down; Twitter data is unavailable for dates after early October.

    2019-10-31

    The Crossref Event Data API is still experiencing problems; the Twitter data for October is incomplete in this snapshot.

    The README file has been modified to reflect changes in the process for creating your own DB snapshots if using the newly released PostgreSQL 12.

    2019-10-01

    The Crossref API is back online, and the "crossref_daily" table should now include up-to-date tweet information for July through September.

    About 40,000 authors were removed from the author table because the name had been removed from all preprints they had previously been associated with, likely because their name changed slightly on the bioRxiv website ("John Smith" to "J Smith" or "John M Smith"). The "author_emails" table was also modified to remove entries referring to the deleted authors. The web crawler is being updated to clean these orphaned entries more frequently.

    2019-08-30

    The Crossref Event Data API, which provides the data used to populate the table of tweet counts, has not been fully functional since early July. While we are optimistic that accurate tweet counts will be available at some point, the sparse values currently in the "crossref_daily" table for July and August should not be considered reliable.

    2019-07-01

    A new "institution" field has been added to the "article_authors" table that stores each author's institutional affiliation as listed on that paper. The "authors" table still has each author's most recently observed institution.

    We began collecting this data in the middle of May, but it has not been applied to older papers yet.

    2019-05-11

    The README was updated to correct a link to the Docker repository used for the pre-built images.

    2019-03-21

    The license for this dataset has been changed to CC-BY, which allows use for any purpose and requires only attribution.

    A new table, "publication_dates," has been added and will be continually updated. This table will include an entry for each preprint that has been published externally for which we can determine a date of publication, based on data from Crossref. (This table was previously included in the "paper" schema but was not updated after early December 2018.)

    Foreign key constraints have been added to almost every table in the database. This should not impact any read behavior, but anyone writing to these tables will encounter constraints on existing fields that refer to other tables. Most frequently, this means the "article" field in a table will need to refer to an ID that actually exists in the "articles" table.

    The "author_translations" table has been removed. This was used to redirect incoming requests for outdated author profile pages and was likely not of any functional use to others.

    The "README.md" file has been renamed "1README.md" because Zenodo only displays a preview for the file that appears first in the list alphabetically.

    The "article_ranks" and "article_ranks_working" tables have been removed as well; they were unused.

    2019-02-13.1

    After consultation with bioRxiv, the "fulltext" table will not be included in further snapshots until (and if) concerns about licensing and copyright can be resolved.

    The "docker-compose.yml" file was added, with corresponding instructions in the README to streamline deployment of a local copy of this database.

    2019-02-13

    The redundant "paper" schema has been removed.

    BioRxiv has begun making the full text of preprints available online. Beginning with this version, a new table ("fulltext") is available that contains the text of preprints that have been processed already. The format in which this information is stored may change in the future; any digression will be noted here.

    This is the first version that has a corresponding Docker image.

  20. World Health Survey 2003 - Ukraine

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +2more
    Updated Oct 17, 2013
    Cite
    World Health Organization (WHO) (2013). World Health Survey 2003 - Ukraine [Dataset]. https://microdata.worldbank.org/index.php/catalog/1755
    Explore at:
    Dataset updated
    Oct 17, 2013
    Dataset provided by
    World Health Organization (https://who.int/)
    Authors
    World Health Organization (WHO)
    Time period covered
    2003
    Area covered
    Ukraine
    Description

    Abstract

    Different countries have different health outcomes that are in part due to the way respective health systems perform. Regardless of the type of health system, individuals will have health and non-health expectations in terms of how the institution responds to their needs. In many countries, however, health systems do not perform effectively and this is in part due to lack of information on health system performance, and on the different service providers.

    The aim of the WHO World Health Survey is to provide empirical data to national health information systems so that the health of populations, the responsiveness of health systems, and related health parameters can be better monitored and measured.

    The overall aim of the survey is to examine the way populations report their health, understand how people value health states, measure the performance of health systems in relation to responsiveness, and gather information on modes and extents of payment for health encounters, through a nationally representative population-based community survey. In addition, specific additional modules address areas such as health care expenditures, adult mortality, birth history, various risk factors, assessment of main chronic health conditions and the coverage of health interventions.

    The objectives of the survey programme are to: 1. develop a means of providing valid, reliable and comparable information, at low cost, to supplement the information provided by routine health information systems. 2. build the evidence base necessary for policy-makers to monitor if health systems are achieving the desired goals, and to assess if additional investment in health is achieving the desired outcomes. 3. provide policy-makers with the evidence they need to adjust their policies, strategies and programmes as necessary.

    Geographic coverage

    The survey sampling frame must cover 100% of the country's eligible population, meaning that the entire national territory must be included. This does not mean that every province or territory need be represented in the survey sample but, rather, that all must have a chance (known probability) of being included in the survey sample.

    There may be exceptional circumstances that preclude 100% national coverage. Certain areas in certain countries may be impossible to include due to reasons such as accessibility or conflict. All such exceptions must be discussed with WHO sampling experts. If any region must be excluded, it must constitute a coherent area, such as a particular province or region. For example, if ¾ of region D in country X is not accessible due to war, the entire region D will be excluded from analysis.

    Analysis unit

    Households and individuals

    Universe

    The WHS will include all male and female adults (18 years of age and older) who are not out of the country during the survey period. It should be noted that this includes the population who may be institutionalized for health reasons at the time of the survey: all persons who would have fit the definition of household member at the time of their institutionalisation are included in the eligible population.

    If the randomly selected individual is institutionalized short-term (e.g. a 3-day hospital stay), the interviewer must return to the household once the individual is back and interview him/her there. If the randomly selected individual is institutionalized long-term (e.g. has been in a nursing home for the last 8 years), the interviewer must travel to that institution to interview him/her.

    The target population includes any adult, male or female age 18 or over living in private households. Populations in group quarters, on military reservations, or in other non-household living arrangements will not be eligible for the study. People who are in an institution due to a health condition (such as a hospital, hospice, nursing home, home for the aged, etc.) at the time of the visit to the household are interviewed either in the institution or upon their return to their household if this is within a period of two weeks from the first visit to the household.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    SAMPLING GUIDELINES FOR WHS

    Surveys in the WHS program must employ a probability sampling design. This means that every single individual in the sampling frame has a known and non-zero chance of being selected into the survey sample. While a Single Stage Random Sample is ideal if feasible, it is recognized that most sites will carry out Multi-stage Cluster Sampling.

    The WHS sampling frame should cover 100% of the eligible population in the surveyed country. This means that every eligible person in the country has a chance of being included in the survey sample. It also means that particular ethnic groups or geographical areas may not be excluded from the sampling frame.

    The sample size of the WHS in each country is 5000 persons (exceptions considered on a by-country basis). An adequate number of persons must be drawn from the sampling frame to account for an estimated amount of non-response (refusal to participate, empty houses etc.). The highest estimate of potential non-response and empty households should be used to ensure that the desired sample size is reached at the end of the survey period. This is very important because if, at the end of data collection, the required sample size of 5000 has not been reached additional persons must be selected randomly into the survey sample from the sampling frame. This is both costly and technically complicated (if this situation is to occur, consult WHO sampling experts for assistance), and best avoided by proper planning before data collection begins.

    All steps of sampling (including the justification for stratification, cluster sizes, probabilities of selection, weights at each stage of selection, and the computer program used for randomization) must be communicated to WHO.
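
    One way to document this, sketched below with made-up probabilities, is to record the selection probability at every stage; the respondent's overall probability of selection is the product of the stage probabilities, and the base design weight is its inverse:

    # Illustrative only: the overall selection probability of a respondent in a
    # multi-stage design is the product of the stage-level probabilities, and
    # the base (design) weight is its inverse. The figures below are made up.
    p_psu = 30 / 600        # PSU selected: 30 clusters drawn out of 600
    p_household = 20 / 250  # 20 households selected out of 250 listed in the PSU
    p_person = 1 / 3        # one adult chosen at random among 3 eligible adults

    p_overall = p_psu * p_household * p_person
    base_weight = 1 / p_overall
    print(round(p_overall, 6), round(base_weight, 1))   # 0.001333 750.0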

    STRATIFICATION

    Stratification is the process by which the population is divided into subgroups. Sampling will then be conducted separately in each subgroup. Strata or subgroups are chosen because evidence is available that they are related to the outcome (e.g. health, responsiveness, mortality, coverage etc.). The strata chosen will vary by country and reflect local conditions. Some examples of factors that can be stratified on are geography (e.g. North, Central, South), level of urbanization (e.g. urban, rural), socio-economic zones, provinces (especially if health administration is primarily under the jurisdiction of provincial authorities), or presence of health facility in area. Strata to be used must be identified by each country and the reasons for selection explicitly justified.

    Stratification is strongly recommended at the first stage of sampling. Once the strata have been chosen and justified, all stages of selection are conducted separately in each stratum. We recommend stratifying on 3-5 factors, and it is optimum to have half as many strata. Note the difference between stratifying variables (such as gender, socio-economic status, or province/region) and strata, which are the combinations of variable categories: for example, "male, high socio-economic status, Xingtao Province" would be one stratum.

    Strata should be as homogeneous as possible within and as heterogeneous as possible between. That is, strata should be formed so that individuals within a stratum are as similar to each other as possible with respect to key variables, and as different as possible from individuals in other strata. This maximizes the efficiency of stratification in reducing sampling variance.
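
    To make the mechanics concrete, here is a minimal sketch under assumed data (a toy frame with two hypothetical stratifying variables, region and urbanicity): the strata are the cross-classified categories, and selection is then run independently within each stratum.

    # Sketch of stratified selection on a toy frame: strata are the combinations
    # of stratifying-variable categories, and a separate draw is made in each.
    import random
    from collections import defaultdict

    # hypothetical frame records: (unit_id, region, urbanicity)
    frame = [(i, random.choice(["North", "Central", "South"]),
                 random.choice(["urban", "rural"])) for i in range(100_000)]

    strata = defaultdict(list)
    for unit_id, region, urbanicity in frame:
        strata[(region, urbanicity)].append(unit_id)      # e.g. ("North", "rural")

    per_stratum_n = 100                                    # illustrative allocation only
    sample = {key: random.sample(units, per_stratum_n) for key, units in strata.items()}
    print({key: len(ids) for key, ids in sample.items()})  # 100 units drawn per stratum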

    MULTI-STAGE CLUSTER SELECTION

    A cluster is a naturally occurring unit or grouping within the population (e.g. enumeration areas, cities, universities, provinces, hospitals, etc.); it is a unit whose administrative boundaries are clear and non-overlapping. Cluster sampling is useful because it avoids having to compile an exhaustive list of every single person in the population. Clusters should be as heterogeneous as possible within and as homogeneous as possible between (note that this is the opposite of the criterion for strata). Clusters should be as small as possible (i.e. large administrative units such as provinces or states are not good clusters), but not so small as to be homogeneous.

    In cluster sampling, a number of clusters are randomly selected from a list of clusters. Then either all members of each chosen cluster, or a random selection from among them, are included in the sample. Multi-stage sampling is an extension of cluster sampling in which a hierarchy of clusters is chosen, going from larger to smaller units.
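
    The sketch below illustrates the idea with made-up enumeration areas. The first-stage draw here uses probability proportional to size, which is a common choice but an assumption on our part rather than something prescribed in this text; the point is that household lists are only ever needed inside the selected clusters.

    # Two-stage sketch under assumed inputs: clusters (e.g. enumeration areas)
    # are drawn first, then households are drawn only within the selected
    # clusters, so no nationwide household list is ever needed.
    import random

    # hypothetical clusters with their household counts
    clusters = {f"EA-{i:03d}": random.randint(80, 400) for i in range(500)}

    n_clusters = 25
    # first stage: probability-proportional-to-size draw (with replacement, for simplicity)
    selected = random.choices(list(clusters), weights=list(clusters.values()), k=n_clusters)

    households_per_cluster = 20
    second_stage = []
    for ea in selected:
        listing = [f"{ea}/hh{j}" for j in range(clusters[ea])]   # listing compiled for that EA only
        second_stage.append((ea, random.sample(listing, households_per_cluster)))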

    To carry out multi-stage sampling, one generally needs to know only the population sizes of the sampling units. For the smallest sampling unit above the elementary unit, however, a complete list of all elementary units (households) is needed, since one cannot randomly select among the households of that final-stage unit without a list of them. This information may be available from the most recent population census. If the last census was more than 3 years ago, or the information it furnished is of poor quality or unreliable, the survey staff will have to enumerate all households in the smallest randomly selected sampling unit. It is very important to budget for this step where necessary, and to ensure that all households are properly enumerated, so that a representative sample is obtained.

    It is always best to have as many PSUs (clusters) in the sample as possible. The reason is that, for a fixed total sample size, the fewer the respondents in each PSU, the lower the clustering effect, which inflates the sampling variance of the estimates.
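
    The usual approximation behind this statement (standard survey-sampling theory, not spelled out in the source) is DEFF ≈ 1 + (b - 1)ρ, where b is the average number of respondents per cluster and ρ the intraclass correlation; for a fixed total sample, smaller b means a smaller design effect and a larger effective sample size.

    # Illustrative design-effect arithmetic with an assumed intraclass correlation.
    def design_effect(avg_cluster_size: float, icc: float) -> float:
        return 1 + (avg_cluster_size - 1) * icc

    n = 5000
    icc = 0.05                                  # assumed intraclass correlation
    for b in (50, 20, 10):                      # i.e. 100, 250 or 500 clusters of size b
        deff = design_effect(b, icc)
        print(b, round(deff, 2), round(n / deff))   # effective sample size grows as b shrinks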
