The Board would use the FR 3076 to seek input from users or potential users of the Board's public website, social media, outreach, and communication responsibilities. The survey would be conducted with a diverse audience of consumers, banks, media, government, educators, and others to gather information about their visit to the Board's public website. Responses to the survey would be used to help improve the usability and offerings on the Board's public website and other online public communications. The frequency of the survey and content of the questions would vary as needs arise for feedback on different resources and from different audiences. The Board anticipates the FR 3076 may be conducted up to 12 times per year, although the survey may not be conducted that frequently. In addition, the Board anticipates conducting up to four focus group sessions per year.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data were collected during a user-centered analysis of the usability of 41 open government data portals, including the EU27, applying a common methodology that considers aspects such as specification of open data sets and feedback and requests, further broken down into 14 sub-criteria. As part of the acceptability tasks, each aspect was assessed on a three-level Likert scale (fulfilled = 3, partially fulfilled = 2, unfulfilled = 1). This dataset summarises a total of 1640 protocols obtained during the analysis of the selected portals, carried out by 40 participants who were selected on a voluntary basis. It is complemented with 4 summaries of these protocols, which include average scores calculated by category, aspect, and country. These data allow comparative analysis of the national open data portals, help to find the key challenges that can negatively impact users' experience, and identify portals that can serve as examples for the less successful open data portals.
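The averaging by aspect and country described above can be sketched as follows; the record layout, country codes, aspect names, and scores are hypothetical, not taken from the published protocols:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical protocol records: (country, aspect, score), where the score
# follows the study's three-level scale: 3 = fulfilled,
# 2 = partially fulfilled, 1 = unfulfilled.
protocols = [
    ("EE", "feedback and requests", 3),
    ("EE", "specification of open data set", 2),
    ("LV", "feedback and requests", 1),
    ("LV", "specification of open data set", 3),
]

def average_scores(records, key_index):
    """Average the Likert scores grouped by one field of the record."""
    groups = defaultdict(list)
    for record in records:
        groups[record[key_index]].append(record[2])
    return {key: mean(scores) for key, scores in groups.items()}

by_country = average_scores(protocols, 0)  # e.g. {"EE": 2.5, "LV": 2.0}
by_aspect = average_scores(protocols, 1)
```

The same grouping function serves for category, aspect, or country summaries by choosing the grouping field.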
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Norwegian public reports on digital health, 2021-2024
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set was acquired using a survey which intends to measure:
• Participants' previous experience of cybersecurity training
• Participants' perception of ideal cybersecurity training
• Participants' perception of a specific cybersecurity training type called Context-Based MicroTraining
• What usability aspects the participants find most important for security features
Data was acquired from Sweden, the UK, and Italy to allow for comparative analysis. Demographic data was collected to allow for further analysis based on these factors. The files included in this data set are:
• Completesurvey: the full survey presented to the participants.
• Dataset: the variables and data for the different questions (available as .sav (SPSS) and .csv).
• Var_info: information about the variables in the dataset.
• Overview: frequency tables for the survey questions (for the complete data set).
• Sweden, UK, and Italy: frequency tables for the survey questions divided by national sample groups.
See attached description.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains the data extracted from the literature for the Systematic Literature Review and is referenced or used in the extended abstract titled "How do Non-profit Open data Intermediaries enhance Open data Usability? A Systematic Literature Review", submitted to the 18th International Symposium on Open Collaboration (Companion), September 6–10, 2022, Madrid, Spain. https://doi.org/10.1145/3555051.3555061
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description of the documents for the experiment conducted with Usevalia
The validation of the Usevalia tool was divided into two parts: (1) a validation with students enrolled in an HCI (Human-Computer Interaction) course of the BSc in Computer Engineering at the University of Murcia (Spain) in the academic year 2020/2021, followed by (2) a validation with five experts in usability audits. This document describes the files that were used during the validation process: 1. The "Usevalia (Responses of students)" document shows the students' answers about their perception of Usevalia, together with the means, standard deviations, and medians of those perceptions. 2. The "Usevalia - Form of students" document shows Questionnaire 1, which was applied to evaluate the students' perception of Usevalia. 3. The "Responses of Evaluation with experts" document shows the scores the experts gave to rate their perception of Usevalia; this consists of a comparison between using Usevalia and using the Usability Datalogger tool.
Contact Information: For further information or inquiries about this dataset, please contact Raimel Sobrino Duque at raimel.sobrino@um.es.
The Usability Team of the National Institute of Standards and Technology's (NIST) Public Safety Communications Research (PSCR) program works to identify issues faced by first responders surrounding the use of their existing and emerging public safety communication technology. The team conducted an exploratory, sequential, mixed-methods study to gather insights into first responders' needs for and problems experienced with communication technology. The multi-phase study included in-depth interviews with 193 first responders in Phase 1, followed by a nationwide survey of 7,182 first responders in Phase 2, across four public safety disciplines: Communication Center & 9-1-1 Services (COMMS), Emergency Medical Services (EMS), Fire Service (FF), and Law Enforcement (LE). The data consists of two datasets: (1) Phase 1 data from 193 interviews with first responders from the four disciplines (COMMS, EMS, FF, LE), including direct quotes from interviewees categorized by codes/subcodes, with demographic information included; (2) Phase 2 survey data from 7,182 first responders from the four disciplines (COMMS, EMS, FF, LE), including their responses on what technology they have and use, along with their needs for and problems experienced with communication technology; demographic information is also included.
This database includes the full systematic review extracted data used in the study titled "Usability Evaluation in Virtual Reality: A Systematic Review".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary material for a relevance and usability evaluation in a data portal for biodiversity research.
Data portal: GFBio (https://www.gfbio.org)
Evaluation time: February 2016 (at that time the search index consisted of ~2 million datasets)
Eight domain experts rated the top 25 search results for 16 provided search questions on a 7-point Likert scale from 0 (irrelevant) to 6 (highly relevant). Afterwards, we asked the users to provide and rate up to two queries of their own.
The users also rated 28 statements in a subsequent usability evaluation on a 5-point Likert scale from 'completely disagree' to 'highly agree'. For some statements, only binary ratings were given.
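The relevance ratings described above lend themselves to simple per-query averaging; a minimal sketch, with made-up queries and scores rather than the study's actual data:

```python
from statistics import mean

# Hypothetical expert ratings: for each query, the scores given to the
# returned results on the study's 7-point scale
# (0 = irrelevant ... 6 = highly relevant).
ratings = {
    "collembola in soil samples": [6, 4, 2, 4, 4],
    "diatom abundance 2015": [2, 0, 2, 4, 2],
}

def mean_relevance(scores_by_query):
    """Mean rated relevance per query, plus a grand mean across queries."""
    per_query = {q: mean(s) for q, s in scores_by_query.items()}
    return per_query, mean(per_query.values())

per_query, overall = mean_relevance(ratings)
```

Averaging per query before taking the grand mean keeps each query's weight equal even when the number of rated results differs.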
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset reports on the usability of Open Data Portals as reported by the European public. Knowledge of open science and data concepts is also reported.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional data related to usability evaluation of the ANON system.
https://creativecommons.org/publicdomain/zero/1.0/
Data to understand how to transform raw Google Analytics data into qualitative usability metrics.
Google Analytics data (Title and Duration in seconds)
Hotjar (NPS)
Survey Monkey (NPS)
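The NPS figures from such exports are typically computed with the standard definition (promoters rate 9–10, detractors 0–6 on a 0–10 scale); a minimal sketch, with illustrative responses rather than the study's data:

```python
def nps(scores):
    """Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative 0-10 responses, e.g. as exported from a survey tool.
responses = [10, 9, 8, 7, 6, 3, 10, 9, 5, 9]
score = nps(responses)  # 5 promoters, 3 detractors out of 10 -> 20.0
```

Passives (7–8) count toward the denominator but neither add to nor subtract from the score, so the result ranges from -100 to +100.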
Bebideria.com.br's developers and its readers; Google's developers and the communities that keep it running and free; Hotjar's team and CEO, who supported me in exporting raw data and even improved the app to meet my experiment's needs; and finally, UDESC's PPGDesign professors, who guided and oriented this study.
Let's make better products!
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper aims to investigate the usefulness of three user testing methods (observation, and using both quantitative and qualitative data from a post-test questionnaire) in terms of their ability or inability to find specific usability problems on university websites. The results showed that observation was the best method, compared to the other two, in identifying large numbers of major and minor usability problems on university websites. The results also showed that employing qualitative data from a post-test questionnaire was a useful complementary method since this identified additional usability problems that were not identified by the observation method. However, the results showed that the quantitative data from the post-test questionnaire were inaccurate and ineffective in terms of identifying usability problems on such websites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1. File: Maqbool_SLR_2023_JSS_Inclusion_610.xlsm.
There are two sheets in the file. A. The Final_Selected_papers sheet provides a comprehensive list of the articles (n=610) selected for our SLR; the selection process and data items are specified and detailed in the article.
B. The Rejected_After_Full_Review sheet provides a comprehensive list of the articles (n=153) rejected for our SLR based on the inclusion/exclusion criteria after the full-article review process; the process and data items are specified and detailed in the article.
2. File: Maqbool_SLR_2023_JSS_Data_Extraction_Form.pdf
This file provides a comprehensive data extraction form; the process and data items are specified and detailed in the article. The form was used to elicit data relevant to answering the postulated research questions, and it served as the foundation for the additional information presented in the final paper.
3. File: Bilal_SLR_JSS_Primary_Studies_References.pdf
This file contains the references of the primary studies (n=610) selected for the systematic literature review. The systematic review aims to explore and analyse research literature related to usability evaluation methods and their effectiveness and efficiency in the context of digital health applications. This file helps to identify the reference of a primary study cited in the paper using the prefix S (e.g., S137). It can be used for peer review, ensuring the reliability and correctness of the findings.
4. File: SLR_Analysis_updated_2023.nvp
The data extracted from each article was recorded in a worksheet (Excel) and then coded in NVivo 12/14 to categorise (classify) and compare extracted facets. Each data item's category and related paper id are coded in the given Excel file. Papers were not included in the NVivo project due to copyright concerns. Relevant papers can be tracked using the provided spreadsheet file (see Paper ID cell).
The file(s) are cleaned as much as reasonable, and other raw data is removed. This file does not include the matrix tables or codes, which were produced and analysed at run time during the analysis phase, although the given package allows them to be re-generated.
-------- UPDATE: --------
5. File: SLR_Analysis_updated_2023_for_MAC.nvpx
This is an extra copy of the NVivo project, created for macOS users.
This replication package was produced and published here; the research was conducted by Karlstad University researchers. We publish data sets to improve coverage and accessibility. For more information or concerns, please contact us.
Linked paper published at: Maqbool, Bilal, and Sebastian Herold. "Potential effectiveness and efficiency issues in usability evaluation within digital health: A systematic literature review." Journal of Systems and Software (2023): 111881.
DOI: https://doi.org/10.1016/j.jss.2023.111881
This work was funded, in parts, by Region Värmland through the DHINO project, Sweden (Grant: RUN/220266) and Vinnova through the DigitalWell Arena (DWA) project, Sweden (Grant: 2018-03025).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data from the article: Assessment Measures in Game-based Learning Research: A Systematic Review.
Data management (identification, screening, and eligibility phases): StArt software
Systematic review: supplementary data from proceedings
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
Ensuring high quality and reusability of personal health data is costly and time-consuming. An AI-powered virtual assistant for health data curation and publishing could support patients to ensure harmonization and data quality enhancement, which improves interoperability and reusability. This formative evaluation study aimed to assess the usability of the first-generation (G1) prototype developed during the AI-powered data curation and publishing virtual assistant (AIDAVA) Horizon Europe project.
Methods
In this formative evaluation study, we planned to recruit 45 patients with breast cancer and 45 patients with cardiovascular disease from three European countries. An intuitive front-end, supported by AI and non-AI data curation tools, is being developed across two generations. G1 was based on existing curation tools and early prototypes of tools being developed. Patients were tasked with ingesting and curating their personal health data, creating a personal health knowledge graph that represented their integrated, high-quality medical records. Usability of G1 was assessed using the System Usability Scale. The subjective importance of the explainability/causability of G1, the perceived fulfillment of these needs by G1, and interest in AIDAVA-like technology were explored using study-specific questionnaires.
Results
A total of 83 patients were recruited; 70 patients completed the study, of whom 19 were unable to successfully curate their health data due to configuration issues when deploying the curation tools. Patients rated G1 as marginally acceptable on the System Usability Scale (59.1 ± 19.7/100) and moderately positive for explainability/causability (3.3–3.8/5), and were moderately positive to positive regarding their interest in AIDAVA-like technology (3.4–4.4/5).
Discussion
Despite its marginal acceptability, G1 shows potential in automating data curation into a personal health knowledge graph, but it has not reached full maturity yet.
G1 deployed very early prototypes of tools planned for the second-generation (G2) prototype, which may have contributed to the lower usability and explainability/causability scores. Conversely, patient interest in AIDAVA-like technology seems quite high at this stage of development, likely due to the promising potential of data curation and data publication technology. Improvements in the library of data curation and publishing tools are planned for G2 and are necessary to fully realize the value of the AIDAVA solution.
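For reference, the 0–100 usability figure reported above follows the standard System Usability Scale scoring rule; a minimal sketch with illustrative responses, not AIDAVA data:

```python
def sus_score(responses):
    """Standard SUS scoring: responses are 10 item ratings (1-5) in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (1st, 3rd, ...) are positively worded: contribute r - 1.
        # Even-numbered items are negatively worded: contribute 5 - r.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

example = sus_score([4, 2, 4, 2, 3, 3, 4, 2, 3, 2])  # 67.5
```

Per-participant scores computed this way are then averaged to yield the kind of mean ± SD figure reported in the study.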
Survey Data: responses to the survey.
Usability Evaluation Data: dashboard usability evaluation data.
The complete study material used for the usability study for the development of Data Cart.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data file contains all data that was used for the development of the eHealth Usability Benchmarking Instrument (HUBBI). It contains the anonymized survey data of the 148 participants who took part in this study. The data set includes data on general demographics, task performance, and the scores on the HUBBI questionnaire.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global usability test summarizer market size reached USD 1.08 billion in 2024, with a robust compound annual growth rate (CAGR) of 17.2% projected through 2033. The market is expected to attain a value of USD 4.99 billion by 2033, driven primarily by the increasing adoption of digital transformation initiatives, the proliferation of user experience (UX) optimization practices, and the growing reliance on data-driven decision-making across industries. As organizations seek to streamline and automate the process of extracting actionable insights from usability tests, demand for advanced summarization solutions continues to surge globally.
The growth of the usability test summarizer market is being propelled by the escalating complexity of digital products and the corresponding need for comprehensive usability testing. Organizations across diverse sectors, from e-commerce to healthcare, are investing heavily in user research to ensure optimal product usability. However, the manual process of analyzing and summarizing usability test results is both time-consuming and resource-intensive. This has led to a significant surge in the adoption of automated usability test summarizer tools, which leverage artificial intelligence and natural language processing to rapidly distill large volumes of qualitative and quantitative test data into clear, actionable summaries. The integration of these tools into product development lifecycles has proven instrumental in accelerating feedback loops and enhancing overall product quality, thereby driving market expansion.
Another key growth factor is the increasing emphasis on customer-centric design philosophies across enterprises of all sizes. As digital competition intensifies, businesses are prioritizing user feedback at every stage of the product development process. Usability test summarizer solutions play a pivotal role in this context by enabling product teams, UX designers, and decision-makers to quickly interpret and act on user feedback from usability studies. The ability to efficiently synthesize findings from diverse user groups and test scenarios is particularly valuable in agile development environments, where rapid iteration is essential. Furthermore, the growing adoption of remote usability testing, fueled by the rise of distributed workforces and global user bases, is amplifying the demand for scalable summarization tools that can handle large, geographically dispersed data sets.
The usability test summarizer market is also benefiting from technological advancements in artificial intelligence and machine learning, which have significantly enhanced the accuracy and efficiency of summarization algorithms. Vendors are increasingly focusing on developing solutions that can not only summarize textual data but also integrate multimedia inputs such as video and audio recordings from usability sessions. This holistic approach to usability data analysis is enabling organizations to uncover deeper insights into user behavior and pain points. Additionally, the emergence of cloud-based deployment models is expanding market accessibility, allowing small and medium-sized enterprises (SMEs) to leverage sophisticated usability summarization capabilities without the need for significant upfront investment in IT infrastructure.
From a regional perspective, North America currently dominates the usability test summarizer market, accounting for the largest revenue share in 2024. The region's leadership is attributed to the high concentration of technology-driven enterprises, robust investment in UX research, and early adoption of AI-powered analytics tools. Europe follows closely, with substantial market growth driven by stringent digital accessibility regulations and a mature digital ecosystem. Asia Pacific is emerging as the fastest-growing region, fueled by rapid digitalization, expanding e-commerce sectors, and increasing awareness of the importance of user experience in product differentiation. Latin America and the Middle East & Africa are also witnessing steady growth, supported by ongoing digital transformation initiatives and the gradual adoption of advanced usability testing practices.
The usability test summarizer market is segmented by component into software and services, each playing a crucial role in the overall value proposition.