100+ datasets found
  1. Measure Evaluation

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Jun 8, 2024
    Cite
    data.usaid.gov (2024). Measure Evaluation [Dataset]. https://catalog.data.gov/dataset/measure-evaluation
    Explore at:
    Dataset updated
    Jun 8, 2024
    Dataset provided by
United States Agency for International Development (https://usaid.gov/)
    Description

MEASURE Evaluation is the USAID Global Health Bureau's primary vehicle for supporting improvements in monitoring and evaluation in population, health, and nutrition worldwide. It helps to identify data needs, collect and analyze technically sound data, and use those data for health decision-making. Some MEASURE Evaluation activities involve the collection of innovative evaluation datasets in order to increase the evidence base on program impact and to evaluate the strengths and weaknesses of recent evaluation methodological developments. Many of these datasets may be available to other researchers to answer questions of particular importance to global health and evaluation research. Some of these datasets are being added to the Dataverse on a rolling basis, as they become available. This collection on the Dataverse platform contains a growing variety and number of global health evaluation datasets.

  2. Python script to calculate the speed of sound and attenuation coefficient in...

    • catalogue.data.govt.nz
    Updated Feb 1, 2001
    Cite
    (2001). Python script to calculate the speed of sound and attenuation coefficient in air [Dataset]. https://catalogue.data.govt.nz/dataset/oai-figshare-com-article-4871984
    Explore at:
    Dataset updated
    Feb 1, 2001
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This code calculates the speed of sound and frequency-dependent attenuation coefficient in air, given acoustic (laser-ultrasound) data recorded at various distances from source to detector. PALplots functions are used for visualization: https://github.com/PALab/palplots.
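    A minimal sketch (not the original script) of how the speed of sound and an attenuation coefficient could be estimated from time-of-flight and amplitude measurements at several source-detector distances; the arrays below are hypothetical placeholders for the recorded data.

    ```python
    # Minimal sketch: estimate speed of sound from arrival time vs. distance and
    # an attenuation coefficient from amplitude decay. All values are hypothetical.
    import numpy as np

    distances_m = np.array([0.05, 0.10, 0.15, 0.20])               # source-detector spacing
    arrival_times_s = np.array([1.6e-4, 3.1e-4, 4.5e-4, 6.0e-4])   # picked signal onsets
    peak_amplitudes = np.array([1.00, 0.78, 0.61, 0.48])           # normalised peak amplitudes

    # Speed of sound: fit t = d / c + t0, so the slope of t(d) is 1/c.
    slope, intercept = np.polyfit(distances_m, arrival_times_s, 1)
    speed_of_sound = 1.0 / slope

    # Attenuation: fit ln(A) = ln(A0) - alpha * d, so the slope of ln A(d) is -alpha.
    alpha_slope, log_a0 = np.polyfit(distances_m, np.log(peak_amplitudes), 1)
    attenuation_coeff = -alpha_slope  # nepers per metre

    print(f"speed of sound ~ {speed_of_sound:.1f} m/s")
    print(f"attenuation coefficient ~ {attenuation_coeff:.2f} Np/m")
    ```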

  3. databricks-dolly-15k

    • huggingface.co
    + more versions
    Cite
    Databricks, databricks-dolly-15k [Dataset]. https://huggingface.co/datasets/databricks/databricks-dolly-15k
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset authored and provided by
    Databricks (http://databricks.com/)
    License

    Attribution-ShareAlike 3.0 (CC BY-SA 3.0): https://creativecommons.org/licenses/by-sa/3.0/
    License information was derived automatically

    Description

    Summary

    databricks-dolly-15k is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported… See the full description on the dataset page: https://huggingface.co/datasets/databricks/databricks-dolly-15k.
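    A minimal sketch of loading this dataset with the Hugging Face `datasets` library; the field names (instruction, context, response, category) follow the dataset card.

    ```python
    # Minimal sketch: load databricks-dolly-15k and inspect one record
    # (requires `pip install datasets`).
    from datasets import load_dataset

    dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

    example = dolly[0]
    print(example["category"])     # e.g. "brainstorming", "closed_qa", ...
    print(example["instruction"])  # the instruction text
    print(example["response"])     # the human-written response
    ```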

  4. Manual snow course observations, raw met data, raw snow depth observations,...

    • catalog.data.gov
    Updated Jun 15, 2024
    + more versions
    Cite
    Climate Adaptation Science Centers (2024). Manual snow course observations, raw met data, raw snow depth observations, locations, and associated metadata for Oregon sites [Dataset]. https://catalog.data.gov/dataset/manual-snow-course-observations-raw-met-data-raw-snow-depth-observations-locations-and-ass
    Explore at:
    Dataset updated
    Jun 15, 2024
    Dataset provided by
    Climate Adaptation Science Centers
    Area covered
    Oregon
    Description

    OSU_SnowCourse Summary: Manual snow course observations were collected over WY 2012-2014 from four paired forest-open sites chosen to span a broad elevation range. Study sites were located in the upper McKenzie (McK) River watershed, approximately 100 km east of Corvallis, Oregon, on the western slope of the Cascade Range, and in the Middle Fork Willamette (MFW) watershed, located to the south of the McKenzie. The sites were designated based on elevation, with a range of 1110-1480 m. Distributed snow depth and snow water equivalent (SWE) observations were collected via monthly manual snow courses from 1 November through 1 April and bi-weekly thereafter. Snow courses spanned 500 m of forested terrain and 500 m of adjacent open terrain. Snow depth observations were collected approximately every 10 m, and SWE was measured every 100 m along the snow courses with a federal snow sampler. These data are raw observations and have not been quality controlled in any way. Distance along the transect was estimated in the field.

    OSU_SnowDepth Summary: 10-minute snow depth observations collected at OSU met stations in the upper McKenzie River watershed and the Middle Fork Willamette watershed during Water Years 2012-2014. Each meteorological tower was deployed to represent either a forested or an open area at a particular site, and generally the locations were paired, with a meteorological station deployed in the forest and in the open area at a single site. These data were collected in conjunction with manual snow course observations, and the meteorological stations were located in the approximate center of each forest or open snow course transect. These data have undergone basic quality control. See manufacturer specifications for individual instruments to determine sensor accuracy. This file was compiled from individual raw data files (named "RawData.txt" within each site and year directory) provided by OSU, along with metadata of site attributes. We converted the Excel-based timestamp (seconds since origin) to a date, changed the NaN flags for missing data to NA, and added site attributes such as site name and cover. Snow depth values in the raw data are flipped (negative, with some correction to use the height of the sensor as zero), so positive raw values do not represent valid depths; we first replaced positive values with NA and then switched the sign of the data to make depths positive. Next, the smooth.m (MATLAB) function was used to roughly smooth the data, with a moving window of 50 points. Finally, outliers were removed: all values higher than the smoothed values + 10 were replaced with NA, and in some cases further single-point outliers were removed.

    OSU_Met Summary: Raw, 10-minute meteorological observations collected at OSU met stations in the upper McKenzie River watershed and the Middle Fork Willamette watershed during Water Years 2012-2014. Each meteorological tower was deployed to represent either a forested or an open area at a particular site, and generally the locations were paired, with a meteorological station deployed in the forest and in the open area at a single site. These data were collected in conjunction with manual snow course observations, and the meteorological stations were located in the approximate center of each forest or open snow course transect. These stations were deployed to collect numerous meteorological variables, of which snow depth and wind speed are included here. These data are raw datalogger output and have not been quality controlled in any way. See manufacturer specifications for individual instruments to determine sensor accuracy. This file was compiled from individual raw data files (named "RawData.txt" within each site and year directory) provided by OSU, along with metadata of site attributes. We converted the Excel-based timestamp (seconds since origin) to a date, changed the NaN and 7999 flags for missing data to NA, and added site attributes such as site name and cover.

    OSU_Location Summary: Location metadata for manual snow course observations and meteorological sensors. These data are compiled from GPS data for which the horizontal accuracy is unknown, and from processed hemispherical photographs. They have not been quality controlled in any way.
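    A minimal sketch (not the original processing code) of the snow-depth clean-up steps described above, using pandas; the file and column names ("RawData.txt", "timestamp_excel", "depth_raw") are hypothetical placeholders.

    ```python
    # Illustrative sketch of the described QC: timestamp conversion, dropping
    # invalid positive raw values, sign flip, 50-point smoothing, outlier removal.
    import pandas as pd

    df = pd.read_csv("RawData.txt", sep="\t")

    # Excel-style serial timestamp (seconds since origin) -> datetime
    df["datetime"] = pd.to_datetime(df["timestamp_excel"], unit="s", origin="1899-12-30")

    # Raw depths are flipped: positive raw values are invalid, so mask them,
    # then switch the sign so depths become positive.
    depth = df["depth_raw"].mask(df["depth_raw"] > 0)
    depth = -depth

    # Rough smoothing with a 50-point moving window (stand-in for MATLAB smooth.m),
    # then flag values more than 10 units above the smoothed curve as outliers.
    smoothed = depth.rolling(window=50, center=True, min_periods=1).mean()
    depth = depth.mask(depth > smoothed + 10)

    df["snow_depth_qc"] = depth
    ```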

  5. GEOSPATIAL DATA Progress Needed on Identifying Expenditures, Building and...

    • hub.arcgis.com
    Updated Jun 11, 2024
    Cite
    GeoPlatform ArcGIS Online (2024). GEOSPATIAL DATA Progress Needed on Identifying Expenditures, Building and Utilizing a Data Infrastructure, and Reducing Duplicative Efforts [Dataset]. https://hub.arcgis.com/documents/c0cef9e4901143cbb9f15ddbb49ca3b4
    Explore at:
    Dataset updated
    Jun 11, 2024
    Dataset provided by
    Authors
    GeoPlatform ArcGIS Online
    Description

    Progress Needed on Identifying Expenditures, Building and Utilizing a Data Infrastructure, and Reducing Duplicative Efforts

    The federal government collects, maintains, and uses geospatial information (data linked to specific geographic locations) to help support varied missions, including national security and natural resources conservation. To coordinate geospatial activities, in 1994 the President issued an executive order to develop a National Spatial Data Infrastructure, a framework for coordination that includes standards, data themes, and a clearinghouse. GAO was asked to review federal and state coordination of geospatial data. GAO’s objectives were to (1) describe the geospatial data that selected federal agencies and states use and how much is spent on geospatial data; (2) assess progress in establishing the National Spatial Data Infrastructure; and (3) determine whether selected federal agencies and states invest in duplicative geospatial data. To do so, GAO identified federal and state uses of geospatial data; evaluated available cost data from 2013 to 2015; assessed FGDC’s and selected agencies’ efforts to establish the infrastructure; and analyzed federal and state datasets to identify duplication.

    What GAO Found

    Federal agencies and state governments use a variety of geospatial datasets to support their missions. For example, after Hurricane Sandy in 2012, the Federal Emergency Management Agency used geospatial data to identify 44,000 households that were damaged and inaccessible and reported that, as a result, it was able to provide expedited assistance to area residents. Federal agencies report spending billions of dollars on geospatial investments; however, the estimates are understated because agencies do not always track geospatial investments. For example, these estimates do not include billions of dollars spent on earth-observing satellites that produce volumes of geospatial data. The Federal Geographic Data Committee (FGDC) and the Office of Management and Budget (OMB) have started an initiative to have agencies identify and report annually on geospatial-related investments as part of the fiscal year 2017 budget process.

    FGDC and selected federal agencies have made progress in implementing their responsibilities for the National Spatial Data Infrastructure as outlined in OMB guidance; however, critical items remain incomplete. For example, the committee established a clearinghouse for records on geospatial data, but the clearinghouse lacks an effective search capability and performance monitoring. FGDC also initiated plans and activities for coordinating with state governments on the collection of geospatial data; however, state officials GAO contacted are generally not satisfied with the committee’s efforts to coordinate with them. Among other reasons, they feel that the committee is focused on a federal perspective rather than a national one, and that state recommendations are often ignored. In addition, selected agencies have made limited progress in their own strategic planning efforts and in using the clearinghouse to register their data to ensure they do not invest in duplicative data. For example, 8 of the committee’s 32 member agencies have begun to register their data on the clearinghouse, and they have registered 59 percent of the geospatial data they deemed critical. Part of the reason that agencies are not fulfilling their responsibilities is that OMB has not made it a priority to oversee these efforts. Until OMB ensures that FGDC and federal agencies fully implement their responsibilities, the vision of improving the coordination of geospatial information and reducing duplicative investments will not be fully realized.

    OMB guidance calls for agencies to eliminate duplication, avoid redundant expenditures, and improve the efficiency and effectiveness of the sharing and dissemination of geospatial data. However, some data are collected multiple times by federal, state, and local entities, resulting in duplication in effort and resources. A new initiative to create a national address database could potentially result in significant savings for federal, state, and local governments. However, agencies face challenges in effectively coordinating address data collection efforts, including statutory restrictions on sharing certain federal address data. Until there is effective coordination across the National Spatial Data Infrastructure, there will continue to be duplicative efforts to obtain and maintain these data at every level of government. (Full report: https://www.gao.gov/assets/d15193.pdf)

    What GAO Recommends

    GAO suggests that Congress consider assessing statutory limitations on address data to foster progress toward a national address database. GAO also recommends that OMB improve its oversight of FGDC and federal agency initiatives, and that FGDC and selected agencies fully implement initiatives. The agencies generally agreed with the recommendations and identified plans to implement them.

  6. CT-FAN: A Multilingual dataset for Fake News Detection

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Oct 23, 2022
    Cite
    Thomas Mandl (2022). CT-FAN: A Multilingual dataset for Fake News Detection [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4714516
    Explore at:
    Dataset updated
    Oct 23, 2022
    Dataset provided by
    Julia Maria Struß
    Juliane Köhler
    Thomas Mandl
    Melanie Siegel
    Michael Wiegand
    Gautam Kishore Shahi
    Description

    By downloading the data, you agree with the terms & conditions mentioned below:

    Data Access: The data in the research collection may only be used for research purposes. Portions of the data are copyrighted and have commercial value as data, so you must be careful to use them only for research purposes.

    Summaries, analyses and interpretations of the linguistic properties of the information may be derived and published, provided it is impossible to reconstruct the information from these summaries. You may not try identifying the individuals whose texts are included in this dataset. You may not try to identify the original entry on the fact-checking site. You are not permitted to publish any portion of the dataset besides summary statistics or share it with anyone else.

    We grant you the right to access the collection's content as described in this agreement. You may not otherwise make unauthorised commercial use of, reproduce, prepare derivative works, distribute copies, perform, or publicly display the collection or parts of it. You are responsible for keeping and storing the data in a way that others cannot access. The data is provided free of charge.

    Citation

    Please cite our work as

    @InProceedings{clef-checkthat:2022:task3, author = {K{\"o}hler, Juliane and Shahi, Gautam Kishore and Stru{\ss}, Julia Maria and Wiegand, Michael and Siegel, Melanie and Mandl, Thomas}, title = "Overview of the {CLEF}-2022 {CheckThat}! Lab Task 3 on Fake News Detection", year = {2022}, booktitle = "Working Notes of CLEF 2022---Conference and Labs of the Evaluation Forum", series = {CLEF~'2022}, address = {Bologna, Italy},}

    @article{shahi2021overview, title={Overview of the CLEF-2021 CheckThat! lab task 3 on fake news detection}, author={Shahi, Gautam Kishore and Stru{\ss}, Julia Maria and Mandl, Thomas}, journal={Working Notes of CLEF}, year={2021} }

    Problem Definition: Given the text of a news article, determine whether the main claim made in the article is true, partially true, false, or other (e.g., claims in dispute) and detect the topical domain of the article. This task will run in English and German.

    Task 3: Multi-class fake news detection of news articles (English). Sub-task A casts fake news detection as a four-class classification problem: given the text of a news article, determine whether the main claim made in the article is true, partially true, false, or other. The training data will be released in batches and comprises roughly 1,264 English-language articles with their respective labels. Our definitions for the categories are as follows:

    False - The main claim made in an article is untrue.

    Partially False - The main claim of an article is a mixture of true and false information. The article contains partially true and partially false information but cannot be considered 100% true. It includes all articles in categories like partially false, partially true, mostly true, miscaptioned, misleading etc., as defined by different fact-checking services.

    True - This rating indicates that the primary elements of the main claim are demonstrably true.

    Other- An article that cannot be categorised as true, false, or partially false due to a lack of evidence about its claims. This category includes articles in dispute and unproven articles.

    Cross-Lingual Task (German)

    Along with the multi-class task for the English language, we have introduced a task for a low-resource language. We will provide the test data in the German language. The idea of the task is to use the English data and the concept of transfer learning to build a classification model for the German language.

    Input Data

    The data will be provided in the format of ID, title, text, rating, and domain; the description of the columns is as follows:

    ID- Unique identifier of the news article

    Title- Title of the news article

    text- Text mentioned inside the news article

    our rating - class of the news article as false, partially false, true, other

    Output data format

    public_id- Unique identifier of the news article

    predicted_rating- predicted class

    Sample File

    public_id, predicted_rating
    1, false
    2, true
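    A minimal sketch of producing a submission in the required output format; the file names and the trivial majority-class prediction are placeholders, not the official CT-FAN baseline (see https://zenodo.org/record/6362498).

    ```python
    # Illustrative sketch: read a test file and write predictions as
    # public_id, predicted_rating. File and column names are assumptions.
    import pandas as pd

    test = pd.read_csv("task3_test.csv")      # hypothetical test file with public_id, title, text

    predictions = pd.DataFrame({
        "public_id": test["public_id"],
        "predicted_rating": "false",           # majority-class placeholder prediction
    })
    predictions.to_csv("submission.csv", index=False)
    ```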

    IMPORTANT!

    We have used data from 2010 to 2022, and the fake news content spans several topics such as elections, COVID-19, etc.

    Baseline: For this task, we have created a baseline system. The baseline system can be found at https://zenodo.org/record/6362498

    Related Work

    Shahi GK. AMUSED: An Annotation Framework of Multi-modal Social Media Data. arXiv preprint arXiv:2010.00502. 2020 Oct 1. https://arxiv.org/pdf/2010.00502.pdf

    G. K. Shahi and D. Nandini, “FakeCovid – a multilingual cross-domain fact check news dataset for covid-19,” in workshop Proceedings of the 14th International AAAI Conference on Web and Social Media, 2020. http://workshop-proceedings.icwsm.org/abstract?id=2020_14

    Shahi, G. K., Dirkson, A., & Majchrzak, T. A. (2021). An exploratory study of covid-19 misinformation on twitter. Online Social Networks and Media, 22, 100104. doi: 10.1016/j.osnem.2020.100104

    Shahi, G. K., Struß, J. M., & Mandl, T. (2021). Overview of the CLEF-2021 CheckThat! lab task 3 on fake news detection. Working Notes of CLEF.

    Nakov, P., Da San Martino, G., Elsayed, T., Barrón-Cedeno, A., Míguez, R., Shaar, S., ... & Mandl, T. (2021, March). The CLEF-2021 CheckThat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news. In European Conference on Information Retrieval (pp. 639-649). Springer, Cham.

    Nakov, P., Da San Martino, G., Elsayed, T., Barrón-Cedeño, A., Míguez, R., Shaar, S., ... & Kartal, Y. S. (2021, September). Overview of the CLEF–2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News. In International Conference of the Cross-Language Evaluation Forum for European Languages (pp. 264-291). Springer, Cham.

  7. ChatQA-Training-Data

    • huggingface.co
    Updated Jun 30, 2023
    Cite
    NVIDIA (2023). ChatQA-Training-Data [Dataset]. https://huggingface.co/datasets/nvidia/ChatQA-Training-Data
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jun 30, 2023
    Dataset provided by
    Nvidia (http://nvidia.com/)
    Authors
    NVIDIA
    License

    https://choosealicense.com/licenses/other/

    Description

    Data Description

    We release the training dataset of ChatQA. It is built and derived from existing datasets: DROP, NarrativeQA, NewsQA, Quoref, ROPES, SQuAD1.1, SQuAD2.0, TAT-QA, an SFT dataset, as well as our synthetic conversational QA dataset generated with GPT-3.5-turbo-0613. The SFT dataset is built and derived from: Soda, ELI5, FLAN, the FLAN collection, Self-Instruct, Unnatural Instructions, OpenAssistant, and Dolly. For more information about ChatQA, check the website!

      Other… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/ChatQA-Training-Data.
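    A minimal sketch of downloading a local copy of the dataset repository with `huggingface_hub`; the repository is gated by its license terms, so accepting them (and possibly an access token) may be required.

    ```python
    # Minimal sketch: snapshot the dataset repo locally
    # (requires `pip install huggingface_hub`).
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="nvidia/ChatQA-Training-Data",
        repo_type="dataset",
    )
    print("Dataset files downloaded to:", local_dir)
    ```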
    
  8. Mental Health & Learning Disabilities Dataset v 1 (Sensitive) Records

    • healthdatagateway.org
    • find.data.gov.scot
    • +1more
    unknown
    Updated Oct 8, 2024
    + more versions
    Cite
    (2024). Mental Health & Learning Disabilities Dataset v 1 (Sensitive) Records [Dataset]. https://healthdatagateway.org/en/dataset/853
    Explore at:
    unknown. Available download formats
    Dataset updated
    Oct 8, 2024
    License

    https://digital.nhs.uk/binaries/content/assets/website-assets/services/dars/nhs_digital_approved_edition_2_dsa_demo.pdf

    Description

    The Mental Health and Learning Disabilities Data Set version 1 (Record Level - sensitive data inclusion). The Mental Health Minimum Data Set was superseded by the Mental Health and Learning Disabilities Data Set, which in turn was superseded by the Mental Health Services Data Set. The Mental Health and Learning Disabilities Data Set collected data from the health records of individual children, young people and adults who were in contact with mental health services.

  9. Global Data Regulation Diagnostic Survey Dataset 2021 - Afghanistan, Angola,...

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +1more
    Updated Oct 26, 2023
    Cite
    World Bank (2023). Global Data Regulation Diagnostic Survey Dataset 2021 - Afghanistan, Angola, Argentina...and 77 more [Dataset]. https://microdata.worldbank.org/index.php/catalog/3866
    Explore at:
    Dataset updated
    Oct 26, 2023
    Dataset authored and provided by
    World Bank (http://worldbank.org/)
    Time period covered
    2020
    Area covered
    Argentina...and 77 more, Angola, Afghanistan
    Description

    Abstract

    The Global Data Regulation Diagnostic provides a comprehensive assessment of the quality of the data governance environment. Diagnostic results show that countries have put greater effort into adopting enabler regulatory practices than safeguard regulatory practices. However, the regulatory development of both enablers and safeguards remains at an intermediate stage: 47 percent of enabler good practices and 41 percent of good safeguard practices are adopted across countries. Under the enabler and safeguard pillars, the diagnostic covers dimensions of e-commerce/e-transactions, enablers for public intent data, enablers for private intent data, safeguards for personal and nonpersonal data, cybersecurity and cybercrime, as well as cross-border data flows. Across all these dimensions, no income group demonstrates advanced regulatory frameworks, indicating significant room for further improvement of the data governance environment.

    The Global Data Regulation Diagnostic is the first comprehensive assessment of laws and regulations on data governance. It covers enabler and safeguard regulatory practices in 80 countries, providing indicators to assess and compare their performance. This Global Data Regulation Diagnostic develops objective and standardized indicators to measure the regulatory environment for the data economy across countries. The indicators aim to serve as a diagnostic tool so countries can assess and compare their performance vis-à-vis other countries. Understanding the gap with global regulatory good practices is a necessary first step for governments when identifying and prioritizing reforms.

    Geographic coverage

    80 countries

    Analysis unit

    Country

    Kind of data

    Observation data/ratings [obs]

    Sampling procedure

    The diagnostic is based on a detailed assessment of domestic laws, regulations, and administrative requirements in 80 countries selected to ensure a balanced coverage across income groups, regions, and different levels of digital technology development. Data are further verified through a detailed desk research of legal texts, reflecting the regulatory status of each country as of June 1, 2020.

    Mode of data collection

    Mail Questionnaire [mail]

    Research instrument

    The questionnaire comprises 37 questions designed to determine if a country has adopted good regulatory practice on data governance. The responses are then scored and assigned a normative interpretation. Related questions fall into seven clusters so that when the scores are averaged, each cluster provides an overall sense of how it performs in its corresponding regulatory and legal dimensions. These seven dimensions are: (1) E-commerce/e-transaction; (2) Enablers for public intent data; (3) Enablers for private intent data; (4) Safeguards for personal data; (5) Safeguards for nonpersonal data; (6) Cybersecurity and cybercrime; (7) Cross-border data transfers.
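    A minimal illustrative sketch (not the diagnostic's actual scoring code) of the cluster-averaging idea described above: question-level scores are grouped by dimension and averaged; the column names and values are hypothetical.

    ```python
    # Illustrative sketch: average question scores within each (country, dimension)
    # cluster. All data here are hypothetical placeholders.
    import pandas as pd

    scores = pd.DataFrame({
        "country":   ["A", "A", "A", "B", "B", "B"],
        "dimension": ["E-commerce", "Safeguards: personal data", "Cross-border",
                      "E-commerce", "Safeguards: personal data", "Cross-border"],
        "score":     [1.0, 0.5, 0.0, 0.5, 1.0, 1.0],   # 1 = good practice adopted
    })

    dimension_scores = scores.groupby(["country", "dimension"])["score"].mean().unstack()
    print(dimension_scores)
    ```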

    Response rate

    100%

  10. stack-exchange-preferences

    • huggingface.co
    • opendatalab.com
    Cite
    Hugging Face H4, stack-exchange-preferences [Dataset]. https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset provided by
    Hugging Face (https://huggingface.co/)
    Authors
    Hugging Face H4
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Dataset Card for H4 Stack Exchange Preferences Dataset

      Dataset Summary
    

    This dataset contains questions and answers from the Stack Overflow Data Dump for the purpose of preference model training. Importantly, the questions have been filtered to fit the following criteria for preference models (following closely from Askell et al. 2021): have >=2 answers. This data could also be used for instruction fine-tuning and language model training. The questions are grouped with… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences.
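    A minimal sketch of streaming the dataset and building (chosen, rejected) answer pairs by score for preference modeling; the field names ("question", "answers", "pm_score") follow the dataset card and should be treated as assumptions here.

    ```python
    # Illustrative sketch: turn scored Stack Exchange answers into preference pairs
    # (requires `pip install datasets`).
    from datasets import load_dataset
    from itertools import combinations

    ds = load_dataset("HuggingFaceH4/stack-exchange-preferences", split="train", streaming=True)

    for example in ds.take(1):
        # Sort answers by score; any pair with different scores yields a preference example.
        answers = sorted(example["answers"], key=lambda a: a["pm_score"], reverse=True)
        for better, worse in combinations(answers, 2):
            if better["pm_score"] > worse["pm_score"]:
                chosen, rejected = better["text"], worse["text"]
                print(len(chosen), len(rejected))
    ```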

  11. Taunton, MA Population Breakdown by Gender Dataset: Male and Female...

    • neilsberg.com
    csv, json
    Updated Feb 24, 2025
    + more versions
    Cite
    Neilsberg Research (2025). Taunton, MA Population Breakdown by Gender Dataset: Male and Female Population Distribution // 2025 Edition [Dataset]. https://www.neilsberg.com/research/datasets/b25710a1-f25d-11ef-8c1b-3860777c1fe6/
    Explore at:
    json, csv. Available download formats
    Dataset updated
    Feb 24, 2025
    Dataset authored and provided by
    Neilsberg Research
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Massachusetts, Taunton
    Variables measured
    Male Population, Female Population, Male Population as Percent of Total Population, Female Population as Percent of Total Population
    Measurement technique
    The data presented in this dataset is derived from the latest U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. To measure the two variables, namely (a) population and (b) population as a percentage of the total population, we initially analyzed and categorized the data for each of the gender classifications (biological sex) reported by the US Census Bureau. For further information regarding these estimates, please feel free to reach out to us via email at research@neilsberg.com.
    Dataset funded by
    Neilsberg Research
    Description
    About this dataset

    Context

    The dataset tabulates the population of Taunton by gender, including both male and female populations. This dataset can be utilized to understand the population distribution of Taunton across both sexes and to determine which sex constitutes the majority.

    Key observations

    There is a slight majority of female population, with 52.17% of total population being female. Source: U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.

    Content

    When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.

    Scope of gender :

    Please note that the American Community Survey asks a question about the respondent's current sex, but not about gender, sexual orientation, or sex at birth. The question is intended to capture data for biological sex, not gender. Respondents are expected to answer either Male or Female. Our research and this dataset mirror the data reported as Male and Female for gender distribution analysis. No further analysis is done on the data reported from the Census Bureau.

    Variables / Data Columns

    • Gender: This column displays the gender (Male / Female).
    • Population: The population of the gender in Taunton is shown in this column.
    • % of Total Population: This column displays the percentage distribution of each gender as a proportion of Taunton's total population. Please note that the percentages may not total 100% due to rounding.
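    A minimal sketch of recomputing the percentage column from the population counts; the CSV file name and exact column headers are assumptions based on the column list above.

    ```python
    # Illustrative sketch: derive each gender's share of the total population.
    import pandas as pd

    df = pd.read_csv("taunton_population_by_gender.csv")   # hypothetical file name

    total = df["Population"].sum()
    df["% of Total Population (recomputed)"] = (df["Population"] / total * 100).round(2)
    print(df[["Gender", "Population", "% of Total Population (recomputed)"]])
    ```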

    Good to know

    Margin of Error

    Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.

    Custom data

    If you do need custom data for any of your research projects, reports, or presentations, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.

    Inspiration

    The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.

    Recommended for further research

    This dataset is a part of the main dataset for Taunton Population by Race & Ethnicity. You can refer to the same here.

  12. News Events Data in Asia (Techsalerator)

    • datarade.ai
    Updated Jul 9, 2024
    Cite
    Techsalerator (2024). News Events Data in Asia ( Techsalerator) [Dataset]. https://datarade.ai/data-products/news-events-data-in-asia-techsalerator-techsalerator
    Explore at:
    .json, .csv, .xls, .txt. Available download formats
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Techsalerator LLC
    Authors
    Techsalerator
    Area covered
    Kyrgyzstan, Timor-Leste, United Arab Emirates, China, Maldives, Brunei Darussalam, Kazakhstan, Uzbekistan, Hong Kong, Iran (Islamic Republic of)
    Description

    Techsalerator’s News Event Data in Asia offers a detailed and expansive dataset designed to provide businesses, analysts, journalists, and researchers with comprehensive insights into significant news events across the Asian continent. This dataset captures and categorizes major events reported from a diverse range of news sources, including press releases, industry news sites, blogs, and PR platforms, offering valuable perspectives on regional developments, economic shifts, political changes, and cultural occurrences.

    Key Features of the Dataset:

    Extensive Coverage: The dataset aggregates news events from a wide range of sources such as company press releases, industry-specific news outlets, blogs, PR sites, and traditional media. This broad coverage ensures a diverse array of information from multiple reporting channels.

    Categorization of Events: News events are categorized into various types including business and economic updates, political developments, technological advancements, legal and regulatory changes, and cultural events. This categorization helps users quickly find and analyze information relevant to their interests or sectors.

    Real-Time Updates: The dataset is updated regularly to include the most current events, ensuring users have access to the latest news and can stay informed about recent developments as they happen.

    Geographic Segmentation: Events are tagged with their respective countries and regions within Asia. This geographic segmentation allows users to filter and analyze news events based on specific locations, facilitating targeted research and analysis.

    Event Details: Each event entry includes comprehensive details such as the date of occurrence, source of the news, a description of the event, and relevant keywords. This thorough detailing helps users understand the context and significance of each event.

    Historical Data: The dataset includes historical news event data, enabling users to track trends and perform comparative analysis over time. This feature supports longitudinal studies and provides insights into the evolution of news events.

    Advanced Search and Filter Options: Users can search and filter news events based on criteria such as date range, event type, location, and keywords. This functionality allows for precise and efficient retrieval of relevant information.

    Asian Countries and Territories Covered:

    • Central Asia: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan
    • East Asia: China, Hong Kong (Special Administrative Region of China), Japan, Mongolia, North Korea, South Korea, Taiwan
    • South Asia: Afghanistan, Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan, Sri Lanka
    • Southeast Asia: Brunei, Cambodia, East Timor (Timor-Leste), Indonesia, Laos, Malaysia, Myanmar (Burma), Philippines, Singapore, Thailand, Vietnam
    • Western Asia (Middle East): Armenia, Azerbaijan, Bahrain, Cyprus, Georgia, Iraq, Israel, Jordan, Kuwait, Lebanon, Oman, Palestine, Qatar, Saudi Arabia, Syria, Turkey (partly in Europe, but often included in Asia contextually), United Arab Emirates, Yemen

    Benefits of the Dataset:

    • Strategic Insights: Businesses and analysts can use the dataset to gain insights into significant regional developments, economic conditions, and political changes, aiding in strategic decision-making and market analysis.
    • Market and Industry Trends: The dataset provides valuable information on industry-specific trends and events, helping users understand market dynamics and identify emerging opportunities.
    • Media and PR Monitoring: Journalists and PR professionals can track relevant news across Asia, enabling them to monitor media coverage, identify emerging stories, and manage public relations efforts effectively.
    • Academic and Research Use: Researchers can utilize the dataset for longitudinal studies, trend analysis, and academic research on various topics related to Asian news and events.

    Techsalerator’s News Event Data in Asia is a crucial resource for accessing and analyzing significant news events across the continent. By offering detailed, categorized, and up-to-date information, it supports effective decision-making, research, and media monitoring across diverse sectors.

  13. Data set for open-loop solution for a stochastic problem

    • ieee-dataport.org
    Updated Apr 12, 2023
    Cite
    Oscar Salviano (2023). data set for open-loop solution for a stochastic problem [Dataset]. http://doi.org/10.21227/815r-6d66
    Explore at:
    Dataset updated
    Apr 12, 2023
    Dataset provided by
    IEEE Dataport
    Authors
    Oscar Salviano
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The focus of this dataset is to provide an open-loop solution for a stochastic problem with imperfect state information and chance constraints adjusted by an optimal gain.

  14. Labeled Image Datasets for AI & Computer Vision

    • images.cv
    Updated Apr 26, 2024
    Cite
    Images.cv (2024). Labeled Image Datasets for AI & Computer Vision [Dataset]. https://images.cv/
    Explore at:
    Dataset updated
    Apr 26, 2024
    Dataset provided by
    Images.cv
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Explore and download labeled image datasets for AI, ML, and computer vision. Find datasets for object detection, image classification, and image segmentation.

  15. Short Term Rentals

    • ckan-dcdev.hub.arcgis.com
    • address-opioid-addiction-bw-1-dcdev.hub.arcgis.com
    Updated Feb 14, 2019
    + more versions
    Cite
    ESRI R&D Center (2019). Short Term Rentals [Dataset]. https://ckan-dcdev.hub.arcgis.com/maps/b381b0a0350843c4a47477926e1bffd7
    Explore at:
    Dataset updated
    Feb 14, 2019
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    ESRI R&D Center
    Description

    Direct link: Short-Term Rental Eligibility Dataset

    DATASET CONTEXT

    Boston's ordinance on short-term rentals is designed to incorporate the growth of the home-share industry into the City's work to create affordable housing for all residents. We want to preserve housing for residents while allowing Bostonians to benefit from this new industry. Starting on January 1, 2019, short-term rentals in Boston will need to register with the City of Boston.

    Eligibility for every unit in the City of Boston is dependent on the following six criteria:

    • No affordability covenant restrictions
    • Compliance with housing laws and codes
    • No violations of laws regarding short-term rental use
    • Owner occupied
    • Two- or three-family dwelling
    • Residential use classification

    The Short-Term Rental Eligibility Dataset leverages information, wherever possible, about these criteria. For additional details and information about these criteria, please visit https://www.boston.gov/short-term-rentals.

    ABOUT THIS DATASET

    ATTENTION: The Short-Term Rental Eligibility Dataset is now available for residents and landlords to determine their registration eligibility.

    NOTE: These data are refreshed on a nightly basis.

    In June 2018, a citywide ordinance established new guidelines and regulations for short-term rentals in Boston. Registration opened January 1, 2019. The Short-Term Rental Eligibility Dataset was created to help residents, landlords, and City officials determine whether a property is eligible to be registered as a short-term rental.

    The Short-Term Rental Eligibility Dataset currently joins data from the following datasets:

    HOW TO DETERMINE ELIGIBILITY FOR SHORT-TERM RENTAL REGISTRATION

    1. Open the Short-Term Rental Eligibility Dataset. In the dataset's search bar, enter the address of the property you are seeking to register.

    2. Find the row containing the correct address and unit of the property you are seeking. This is the information we have for your unit.

    3. Look at the columns marked as “Home-Share Eligible,” “Limited-Share Eligible,” and “Owner-Adjacent Eligible.”

      A “yes” under any of these columns means your unit IS eligible for registration under that short-term rental type. Click here for a description of short-term rental types.

      A “no” under any of these columns means your unit is NOT eligible for registration under that short-term rental type. Click here for a description of short-term rental types.

    4. If your unit has a “yes” under “Home-Share Eligible,” “Limited-Share Eligible,” or “Owner-Adjacent Eligible,” you can register your unit here.
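    A minimal sketch of checking eligibility programmatically, assuming a CSV export of the Short-Term Rental Eligibility Dataset; the file name and the "Address" column are hypothetical, while the eligibility column names follow the description above.

    ```python
    # Illustrative sketch: look up a unit by address and read its eligibility flags.
    import pandas as pd

    df = pd.read_csv("short_term_rental_eligibility.csv")   # hypothetical export

    address = "123 EXAMPLE ST"                               # hypothetical address to look up
    unit = df[df["Address"].str.contains(address, case=False, na=False)]

    eligibility_columns = ["Home-Share Eligible", "Limited-Share Eligible", "Owner-Adjacent Eligible"]
    print(unit[eligibility_columns])                          # "yes" means eligible for that rental type
    ```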

    WHY IS MY UNIT LISTED AS “NOT ELIGIBLE”?

    If you find that your unit is listed as NOT eligible, and you would like to understand more about why, you can use the Short-Term Rental Eligibility Dataset to learn more. The following columns measure each of the six eligibility criteria in the following ways:

    1. No affordability covenant restrictions

      • A “yes” in the “Income Restricted” column tells you that the unit is marked as income restricted and is NOT eligible.

    The “Income Restricted” column measures whether the unit is subject to an affordability covenant, as reported by the Department of Neighborhood Development and/or the Boston Planning and Development Agency.
    For questions about affordability covenants, contact the Department of Neighborhood Development.

    2. Compliance with housing laws and codes

      • A “yes” in the “Problem Properties” column tells you that this unit is considered a “Problem Property” by the Problem Properties Task Force and is NOT eligible.

    Learn more about how “Problem Properties” are defined here.

      • A “yes” in the “Problem Property Owner” column tells you that the owner of this unit also owns a “Problem Property,” as reported by the Problem Properties Task Force.

    Owners with any properties designated as a Problem Property are NOT eligible.

    No unit owned by the owner of a “Problem Property” may register a short-term rental.
    Learn more about how “Problem Properties” are defined here.

      • The “Open Violation Count” column tells you how many open violations the unit has. Units with any open violations are NOT eligible. Violations counted include: violations of the sanitary, building, zoning, and fire code; stop work orders; and abatement orders.

    NOTE: Violations written before 1/1/19 that are still open will make a unit NOT eligible until these violations are resolved.
    If your unit has an open violation, visit these links to appeal your violation(s) or pay your code violation fine(s).

      • The “Violations in the Last 6 Months” column tells you how many violations the unit has received in the last six months. Units with three or more violations, whether open or closed, are NOT eligible.

    NOTE: Only violations written on or after 1/1/19 will count against this criterion.
    If your unit has an open violation, visit these links to appeal your violation(s) or pay your code violation fine(s).

    How to comply with housing laws and codes:
    Have an open violation? Visit these links to appeal your violation(s) or pay your code violation fine(s).
    Have questions about problem properties? Visit Neighborhood Service’s Problem Properties site.
    3. No violations of laws regarding short-term rental use

      • A legal restriction that prohibits the use of the unit as a short-term rental (for example, under condominium bylaws) makes the unit NOT eligible. Units with legal restrictions found upon investigation are NOT eligible.

    If the investigation of a complaint against the unit yields restrictions of the nature detailed above, we will mark the unit with a “yes” in this column. Until such complaint-based investigations begin, all units are marked with “no.”
    NOTE: Currently no units have a “legally restricted” designation.

    4. Owner occupied

      Limited-Share: If you are the owner-occupant of a unit and you have not filed for Residential Tax Exemption, you can still register your unit by proving owner-occupancy. It is recommended that you submit proof of residency in your short-term rental registration application to expedite the process of proving owner-occupancy (see “Primary Residence Evidence” section).

      • “Building Owner-Occupied” measures whether the building has a single owner AND is owner occupied. A “no” in this column indicates that the unit is NOT eligible for an owner-adjacent short-term rental.

    If you believe your building occupancy data is incorrect, please contact the Assessing Department.

    5. Two- or three-family dwelling

      • The “Units in Building” column tells you how many units are in the building. Owner-Adjacent units are only allowed in two- to three-family buildings; therefore, four or more units in this column will mark the unit as NOT eligible for an Owner-Adjacent Short-Term Rental.

      • A “no” in the “Building Single Owner” column tells you that the owner of this unit does not own the entire building and is NOT eligible for an Owner-Adjacent Short-Term Rental.

      If you believe your building occupancy data is incorrect, please contact the Assessing Department.

    Visit this site for more information on unit eligibility criteria.

  16. Only Time Will Tell Dataset

    • paperswithcode.com
    Updated Oct 13, 2021
    Cite
    (2021). Only Time Will Tell Dataset [Dataset]. https://paperswithcode.com/dataset/only-time-will-tell
    Explore at:
    Dataset updated
    Oct 13, 2021
    Description

    Simulation results of the time-respecting and time-ignoring horizon of the code review network at Microsoft, as JSON. For further details, please see https://github.com/michaeldorner/only-time-will-tell

  17. Determining thermal stress using indices: sea surface temperature

    • palau-data.sprep.org
    • solomonislands-data.sprep.org
    • +9more
    pdf
    Updated Feb 15, 2022
    Cite
    School of Marine Science (2022). Determining thermal stress using indices: sea surface temperature [Dataset]. https://palau-data.sprep.org/dataset/determining-thermal-stress-using-indices-sea-surface-temperature
    Explore at:
    pdf (174101). Available download formats
    Dataset updated
    Feb 15, 2022
    Dataset provided by
    School of Marine Science
    License

    Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
    License information was derived automatically

    Area covered
    Pacific Region
    Description

    Sea temperatures in many tropical regions have increased by almost 1°C over the past 100 years and are currently increasing at 1 ~ 2°C per century. Satellite and compiled in situ observations of sea surface temperature have greatly increased the ability to detect anomalous and persistent warm water and are being widely used to predict climate change, coral bleaching, and mortality. In my study I measured in situ sea surface temperature using Vemco water probe loggers. I used three indices to determine thermal stress on a reef flat: sea surface temperature anomalies, degree heating days, and heating rate. I found that sea surface temperature anomalies provide significant data for determining whether heating is occurring on a reef, and that mean monthly temperature data alone are not sufficient to indicate that a reef is heating and may bleach. Accumulated heat stress, represented by exposure time and temperature (degree heating days, DHD), allows forecasting of bleaching severity. The cumulative thermal stress graph in my study indicates that after 120 degree heating days, thermal stress kept increasing on the reef for at least three more weeks into the cooler months; hence it is vital to record temperature even after the summer months. Available online. Call Number: [EL]. Physical Description: 6 pages.
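    An illustrative sketch (not the study's code) of the two indices discussed above: daily sea surface temperature anomalies relative to a climatological baseline, and degree heating days accumulated when temperature exceeds that baseline; the temperature series and baseline value are hypothetical.

    ```python
    # Illustrative sketch: SST anomalies and cumulative degree heating days (DHD).
    import numpy as np

    daily_sst = np.array([29.1, 29.4, 29.9, 30.2, 30.5, 30.1, 29.8])  # °C, hypothetical
    climatological_max = 29.5                                          # °C, hypothetical baseline

    anomalies = daily_sst - climatological_max       # positive values indicate thermal stress
    hotspots = np.clip(anomalies, 0, None)           # only exceedances contribute
    degree_heating_days = hotspots.cumsum()          # accumulated heat stress (°C-days)

    print("daily anomalies:", anomalies)
    print("cumulative DHD:", degree_heating_days)
    ```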

  18. Inspire data set BPL “Rosenhag I (plan of origin)”

    • data.europa.eu
    • gimi9.com
    wfs
    + more versions
    Cite
    Inspire data set BPL “Rosenhag I (plan of origin)” [Dataset]. https://data.europa.eu/data/datasets/90684aec-d526-4f2b-ba38-e5b7f3f6e616
    Explore at:
    wfs. Available download formats
    Description

    Development plan “Rosenhag I (plan of origin)” of the Große Kreisstadt (major district town) of Waghäusel, transformed according to INSPIRE and based on an XPlanung dataset in version 5.0.

  19. GIP AssetList Database v1.2 20150130

    • researchdata.edu.au
    • data.gov.au
    • +2more
    Updated Mar 30, 2016
    Cite
    Bioregional Assessment Program (2016). GIP AssetList Database v1.2 20150130 [Dataset]. https://researchdata.edu.au/gip-assetlist-database-v12-20150130/2986327
    Explore at:
    Dataset updated
    Mar 30, 2016
    Dataset provided by
    data.gov.au
    Authors
    Bioregional Assessment Program
    Description

    Abstract

    [This dataset was superseded by GIP AssetList Database v1.3 20150212 (GUID: e0a8bc96-e97b-44d4-858e-abbb06ddd87f) on 12/2/2015.]

    The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.

    This dataset contains the spatial and non-spatial (attribute) components of the Gippsland bioregion Asset List as two .mdb files, which are readable as an MS Access database or as an ESRI Personal Geodatabase.

    Under the BA program, a spatial assets database is developed for each defined bioregional assessment project. The spatial elements that underpin the identification of water dependent assets are identified in the first instance by regional NRM organisations (via the WAIT tool) and supplemented with additional elements from national and state/territory government datasets. All reports received associated with the WAIT process for Gippsland are included in the zip file as part of this dataset.

    Elements are initially included in the preliminary assets database if they are partly or wholly within the bioregion's preliminary assessment extent (Materiality Test 1, M1). Elements are then grouped into assets which are evaluated by project teams to determine whether they meet the second Materiality Test (M2). Assets meeting both Materiality Tests comprise the water dependent asset list. Descriptions of the assets identified in the Gippsland bioregion are found in the "AssetList" table of the database. In this version of the database only M1 has been assessed.

    Assets are the spatial features used by project teams to model scenarios under the BA program. Detailed attribution does not exist at the asset level. Asset attribution includes only the core set of BA-derived attributes reflecting the BA classification hierarchy, as described in Appendix A of "AssetList_database_GIP_v1p2_20150130.doc", located in the zip file as part of this dataset.

    The "Element_to_Asset" table contains the relationships and identifies the elements that were grouped to create each asset.

    Detailed information describing the database structure and content can be found in the document "AssetList_database_GIP_v1p2_20150130.doc" located in the zip file.

    Some of the source data used in the compilation of this dataset is restricted.

    Purpose

    [THIS IS NOT THE CURRENT ASSET LIST.

    This dataset was superseded by GIP AssetList Database v1.3 20150212 (GUID: e0a8bc96-e97b-44d4-858e-abbb06ddd87f) on 12/2/2015.

    THIS DATASET IS NOT TO BE PUBLISHED IN ITS CURRENT FORM.]

    Dataset History

    This dataset is an update of the previous version of the Gippsland asset list database: "Gippsland Asset List V1 20141210"; ID: 112883f7-1440-4912-8fc3-1daf63e802cb, which was updated with the inclusion of a number of additional datasets from the Victorian Department of the Environment and Primary Industries as identified in the "linkages" section and below.

    Victorian Farm Dam Boundaries

    https://data.bioregionalassessments.gov.au/datastore/dataset/311a47f9-206d-4601-aa7d-6739cfc06d61

    Flood Extent 100 year extent West Gippsland Catchment Management Authority GIP v140701

    https://data.bioregionalassessments.gov.au/dataset/2ff06a4f-fdd5-4a34-b29a-a49416e94f15

    Irrigation District Department of Environment and Primary Industries GIP

    https://data.bioregionalassessments.gov.au/datastore/dataset/880d9042-abe7-4669-be3a-e0fbe096b66a

    Landscape priority areas (West)

    West Gippsland Regional Catchment Strategy Landscape Priorities WGCMA GIP 201205

    https://data.bioregionalassessments.gov.au/datastore/dataset/6c8c0a81-ba76-4a8a-b11a-1c943e744f00

    Plantation Forests Public Land Management(PLM25) DEPI GIP 201410

    https://data.bioregionalassessments.gov.au/datastore/dataset/495d0e4e-e8cd-4051-9623-98c03a4ecded

    and additional data identifying "Vulnerable" species from the datasets:

    Victorian Biodiversity Atlas flora - 1 minute grid summary

    https://data.bioregionalassessments.gov.au/datastore/dataset/d40ac83b-f260-4c0b-841d-b639534a7b63

    Victorian Biodiversity Atlas fauna - 1 minute grid summary

    https://data.bioregionalassessments.gov.au/datastore/dataset/516f9eb1-ea59-46f7-84b1-90a113d6633d

    A number of restricted datasets were used to compile this database. These are listed in the accompanying documentation and below:

    • The Collaborative Australian Protected Areas Database (CAPAD) 2010

    • Environmental Assets Database (Commonwealth Environmental Water Holder)

    • Key Environmental Assets of the Murray-Darling Basin

    • Communities of National Environmental Significance Database

    • Species of National Environmental Significance

    • Ramsar Wetlands of Australia 2011

    Dataset Citation

    Bioregional Assessment Programme (2015) GIP AssetList Database v1.2 20150130. Bioregional Assessment Derived Dataset. Viewed 07 February 2017, http://data.bioregionalassessments.gov.au/dataset/6f34129d-50a3-48f7-996c-7a6c9fa8a76a.

    Dataset Ancestors

  20. Detect Overload Dataset

    • universe.roboflow.com
    zip
    Updated May 27, 2023
    Cite
    (2023). Detect Overload Dataset [Dataset]. https://universe.roboflow.com/project-0wn6x/detect-overload/dataset/1
    Explore at:
    zip. Available download formats
    Dataset updated
    May 27, 2023
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Overload Or Normal Bounding Boxes
    Description

    Detect Overload

    ## Overview
    
    Detect Overload is a dataset for object detection tasks - it contains Overload Or Normal annotations for 9,985 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
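
    A minimal sketch of pulling this dataset with the `roboflow` pip package; the API key is a placeholder, the workspace/project/version identifiers follow the dataset URL above, and the export format ("yolov8") is an assumption.

    ```python
    # Illustrative sketch: download the Detect Overload dataset via the Roboflow API
    # (requires `pip install roboflow` and a valid API key).
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("project-0wn6x").project("detect-overload")
    dataset = project.version(1).download("yolov8")

    print("Downloaded to:", dataset.location)
    ```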
    