5 datasets found
  1. Evaluation metrics of the ML models performance built for student and...

    • plos.figshare.com
    xls
    Updated Oct 25, 2023
    Cite
    Clare Rainey; Angelina T. Villikudathil; Jonathan McConnell; Ciara Hughes; Raymond Bond; Sonyia McFadden (2023). Evaluation metrics of the ML models performance built for student and qualified radiographer groups. [Dataset]. http://doi.org/10.1371/journal.pdig.0000229.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Oct 25, 2023
    Dataset provided by
    PLOS Digital Health
    Authors
    Clare Rainey; Angelina T. Villikudathil; Jonathan McConnell; Ciara Hughes; Raymond Bond; Sonyia McFadden
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results are based on the average of 3-fold cross-validation. The top-performing ML model and its metrics are highlighted for comparison. SVM denotes Support Vector Machine, NB Naive Bayes, k-NN K-Nearest Neighbour, LR Logistic Regression, and RF Random Forest.
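The reporting scheme in this caption (average each model's metric over the 3 folds, then flag the top performer) can be sketched in a few lines. The per-fold accuracy values below are hypothetical illustrative numbers, not figures from the dataset:

```python
# Minimal sketch of averaging metrics over 3-fold cross-validation and
# selecting the top-performing model. All accuracy values are hypothetical.
fold_accuracy = {
    "SVM":  [0.82, 0.80, 0.84],
    "NB":   [0.74, 0.73, 0.75],
    "k-NN": [0.78, 0.77, 0.79],
    "LR":   [0.81, 0.79, 0.80],
    "RF":   [0.83, 0.84, 0.82],
}

# Average each model's metric across the three folds.
mean_accuracy = {m: sum(v) / len(v) for m, v in fold_accuracy.items()}

# The model with the highest mean metric would be the one highlighted.
best = max(mean_accuracy, key=mean_accuracy.get)
print(best, round(mean_accuracy[best], 3))
```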

  2. Global Survey Software for Nonprofit Market Research Report: By Deployment...

    • wiseguyreports.com
    Updated Sep 15, 2025
    Cite
    (2025). Global Survey Software for Nonprofit Market Research Report: By Deployment Model (Cloud-Based, On-Premises, Hybrid), By End User (Charities, Foundations, Educational Institutions, Healthcare Organizations), By Features (Survey Design Tools, Data Analysis Tools, Reporting Features, Collaboration Tools), By Type of Surveys (Donor Feedback Surveys, Volunteer Satisfaction Surveys, Event Feedback Surveys, Program Evaluation Surveys) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2035 [Dataset]. https://www.wiseguyreports.com/reports/survey-software-for-non-profit-market
    Explore at:
    Dataset updated
    Sep 15, 2025
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Sep 25, 2025
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2023
    REGIONS COVERED: North America, Europe, APAC, South America, MEA
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2024: 2113.7 (USD Million)
    MARKET SIZE 2025: 2263.7 (USD Million)
    MARKET SIZE 2035: 4500.0 (USD Million)
    SEGMENTS COVERED: Deployment Model, End User, Features, Type of Surveys, Regional
    COUNTRIES COVERED: US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA
    KEY MARKET DYNAMICS: increasing demand for data analytics, growing focus on donor engagement, rise in remote survey solutions, need for cost-effective software, expanding nonprofit sector involvement
    MARKET FORECAST UNITS: USD Million
    KEY COMPANIES PROFILED: QuestionPro, Zoho Survey, SoGoSurvey, Typeform, SurveyGizmo, SurveyMonkey, Qualtrics, Alchemer, Google Forms, LimeSurvey, Get Feedback, Formstack
    MARKET FORECAST PERIOD: 2025 - 2035
    KEY MARKET OPPORTUNITIES: Cloud-based solutions expansion, Enhanced data analytics integration, Mobile survey accessibility improvements, User-friendly interface demand, Increased nonprofit digital transformation efforts.
    COMPOUND ANNUAL GROWTH RATE (CAGR): 7.1% (2025 - 2035)
  3. Directional Change in Polygonal Distributions: Comparing human and...

    • datasetcatalog.nlm.nih.gov
    • borealisdata.ca
    Updated Dec 22, 2020
    Cite
    Phillips, Sierra; Robertson, Colin (2020). Directional Change in Polygonal Distributions: Comparing human and computational directional relations in GIS data [Dataset]. http://doi.org/10.5683/SP2/2XFPTP
    Explore at:
    Dataset updated
    Dec 22, 2020
    Authors
    Phillips, Sierra; Robertson, Colin
    Description

    Existing methods for calculating directional relations in polygons (i.e. the directional similarity model, the cone-based model, and the modified cone-based model) were compared to human perceptions of change through an online survey. The results from this survey provide the first empirical validation of computational approaches to calculating directional relations in polygonal spatial data. We found that while the evaluated methods generally agreed with each other, they varied in their alignment with human perceptions of directional relations. Specifically, translation transformations of the target and reference polygons showed the greatest discrepancy from human perceptions and across methods.
    The online survey was developed using Qualtrics Survey Software, and participants were recruited via online messaging on social media (i.e., Twitter) with hashtags related to geographic information science. In total, sixty-one individuals responded to the survey. The survey consisted of nine questions. For the first question, participants indicated how many years they had worked with GIS and/or spatial data. For the remaining eight questions, participants ranked pictorial database scenes according to the degree of their match to query scenes. Each of these questions represented a test case that Goyal and Egenhofer (2001) used to empirically evaluate the directional similarity model; participants were randomly presented with four of these questions. The query scenes were created using ArcMap and contained a pair of reference and target polygons. The database scenes were generated by gradually changing the geometry of the target polygon within each query scene. The relations between the target and reference polygon varied by the type of movement, the scaling change of the polygon, and changes in rotation. The scenarios were varied in order to capture a representative range of variability in polygon movements and changes in real-world data.
    The R statistical computing environment was used to determine the similarity value that corresponds with each database scene based on the directional similarity model, the cone-based model, and the modified cone-based model. Using the survey responses, the frequencies of first, second, third, etc. ranks were calculated for each database scene. Weight variables were multiplied by the frequencies to create an overall rank based on participant responses. A rank of one was weighted as a five, a rank of two was weighted as a four, and so on. Spearman's rank-order correlation was used to measure the strength and direction of association between the rank determined using the three models and the rank determined using participant responses.
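The weighted-rank aggregation and Spearman comparison described above (the original analysis was done in R) can be sketched as follows. The scene labels, rank counts, and model ordering are hypothetical illustrative values, not figures from the dataset:

```python
# Sketch of aggregating participant rank frequencies into an overall rank
# and comparing it with a model's ranking via Spearman's rho.
# All values below are hypothetical.

# rank_counts[scene] = number of participants assigning ranks 1..5
rank_counts = {
    "scene_A": [30, 15, 10, 4, 2],
    "scene_B": [10, 20, 15, 10, 6],
    "scene_C": [5, 10, 20, 15, 11],
}

# A rank of one is weighted as five, a rank of two as four, and so on;
# summing the weighted frequencies gives an overall score per scene.
weights = [5, 4, 3, 2, 1]
scores = {s: sum(w * c for w, c in zip(weights, counts))
          for s, counts in rank_counts.items()}

# Convert scores to an overall participant-derived rank (1 = best match).
ordered = sorted(scores, key=scores.get, reverse=True)
participant_rank = {s: i + 1 for i, s in enumerate(ordered)}

# Hypothetical ranking produced by one of the computational models.
model_rank = {"scene_A": 1, "scene_B": 3, "scene_C": 2}

# Spearman's rank-order correlation (no ties):
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
n = len(rank_counts)
d2 = sum((participant_rank[s] - model_rank[s]) ** 2 for s in rank_counts)
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(rho)
```

For tied ranks, the closed-form formula above no longer applies and a library routine such as SciPy's `spearmanr` would be the safer choice.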

  4. National mileage fee survey

    • data.niaid.nih.gov
    • search.dataone.org
    zip
    Updated Mar 18, 2024
    Cite
    Clare Nelson (2024). National mileage fee survey [Dataset]. http://doi.org/10.5061/dryad.rv15dv4f0
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 18, 2024
    Dataset provided by
    University of Vermont
    Authors
    Clare Nelson
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    As governing bodies continue to explore mileage fees as an alternative to the gas tax, the uncertainty surrounding public support remains a critical barrier to policy uptake. This study examines the extent to which public perceptions of mileage fees are guided by misinformation or a lack of information, using a national, internet-based survey. We use hypothetical voting opportunities to gather respondent support for mileage fees, coupled with educational treatments that address mileage fee fairness, privacy, and costs. The findings indicate that respondents are largely misinformed or lack information about mileage fees and the gas tax. Pre-education, only 32% of respondents supported the policy; post-education, 46% did. Through binomial, multinomial, and fixed effect modeling, we examined the factors associated with policy support, changes in policy support, and the educational treatments. Ultimately, our findings indicate that education can play a key role in increasing support for a mileage fee policy as an alternative to the gas tax.
    Methods
    An internet-based survey was used to assess nationwide support for replacing state gas taxes with a mileage fee. Respondents were given three opportunities to vote for or against a mileage fee replacement, with educational treatments in between votes. The impact of education on respondent voting was evaluated using a variety of regression modelling methods. Respondents were recruited to the survey through Qualtrics, which used quota-based sampling schemes to field the survey in every U.S. state. Since this research hypothesized that mileage fee opinions may be due in part to low information about mileage fees, we omitted respondents from states where widespread mileage fee education or mileage fee policies had been implemented. As of July 2023, we identified California, Oregon, Utah, and Hawaii as states where residents were likely meaningfully more educated about mileage fees, and chose not to survey those populations. Three versions of the survey were released, each proposing that mileage fees be collected using a different method: (1) an annual odometer reading, (2) a plug-in device without GPS technology, and (3) a plug-in device with GPS technology. Apart from the collection method displayed, the surveys were identical.
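The within-respondent change in support that this design measures (the same respondent voting before and after an educational treatment) can be tabulated as below. The paired votes are hypothetical; only the 32% to 46% aggregate pattern mirrors the reported findings:

```python
# Toy sketch of tabulating paired pre/post-education votes per respondent.
# The individual vote records are hypothetical illustrative data.
n = 100
pre = [1] * 32 + [0] * 68        # 1 = voted for the mileage fee replacement

# Hypothetically, 14 initial opponents switch to support after education.
post = pre.copy()
opponents = [i for i, v in enumerate(pre) if v == 0]
for i in opponents[:14]:
    post[i] = 1

pre_support = sum(pre) / n       # 0.32
post_support = sum(post) / n     # 0.46
changes = sum(1 for a, b in zip(pre, post) if a != b)
print(pre_support, post_support, changes)
```

A table of such paired votes is the input a fixed-effects or binomial model would consume to estimate the treatment effect; the actual model specifications belong to the study itself.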

  5. Global Online Survey Software Market Research Report: By Application...

    • wiseguyreports.com
    Updated Dec 31, 2024
    Cite
    (2024). Global Online Survey Software Market Research Report: By Application (Customer Feedback, Market Research, Employee Engagement, Academic Research), By Deployment Model (Cloud-Based, On-Premise), By End User (Businesses, Educational Institutions, Government Agencies, Non-Profit Organizations), By Survey Type (Online Surveys, Mobile Surveys, Multimedia Surveys) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2035 [Dataset]. https://www.wiseguyreports.com/de/reports/online-survey-software-market
    Explore at:
    Dataset updated
    Dec 31, 2024
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Sep 25, 2025
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2023
    REGIONS COVERED: North America, Europe, APAC, South America, MEA
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2024: 4.37 (USD Billion)
    MARKET SIZE 2025: 4.71 (USD Billion)
    MARKET SIZE 2035: 10.0 (USD Billion)
    SEGMENTS COVERED: Application, Deployment Model, End User, Survey Type, Regional
    COUNTRIES COVERED: US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA
    KEY MARKET DYNAMICS: growing demand for data-driven insights, increasing use of mobile surveys, rising need for consumer feedback, advancements in survey technology, competitive pricing and subscription models
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Formstack, SurveyGizmo, JotForm, Microsoft Forms, QuestionPro, Typeform, Qualtrics, GetFeedback, SurveyMonkey, Google Forms, Zoho Survey, Alibaba Cloud
    MARKET FORECAST PERIOD: 2025 - 2035
    KEY MARKET OPPORTUNITIES: AI integration for data analysis, Mobile-friendly survey solutions, Enhanced data security features, Integration with CRM systems, Customizable survey templates and branding
    COMPOUND ANNUAL GROWTH RATE (CAGR): 7.8% (2025 - 2035)
