Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data collected during a study ("An Integrated Usability Framework for Evaluating Open Government Data Portals: Comparative Analysis of EU and GCC Countries") conducted by Fillip Molodtsov and Anastasija Nikiforova (University of Tartu).
It is being made public both to serve as supplementary data for the paper and to allow other researchers to use these data in their own work, potentially contributing to the improvement of current data ecosystems and the development of user-friendly, collaborative, robust, and sustainable open data portals.
Purpose of the study
This paper develops an integrated framework for evaluating OGD portal effectiveness that accommodates user diversity (regardless of their data literacy and language), evaluates collaboration and participation, and the ability of users to explore and understand the data provided through them.
The framework is validated by applying it to 33 national portals across European Union (EU) and Gulf Cooperation Council (GCC) countries, as a result of which we rank the OGD portals, identify good practices from which lower-performing portals can learn, and highlight common shortcomings.
Methodology
(1) a systematic literature review to establish a knowledge base and identify frameworks that have been used to evaluate OGD portals - Dataset_Usability_Framework_SLR;
(2) development of the Integrated Usability Framework for Evaluating Open Government Data Portals, whose content is based on the outputs of the first step, selected articles by experts in portal design, and an exploratory assessment of the French, Irish, Estonian, and Spanish portals - Dataset_Integrated_Usability_Framework;
(3) data collection, i.e., completion of the protocol developed in the previous step by analysing 34 national OGD portals of the EU and GCC countries. Once all individual protocols were collected, total scores were calculated using the weighting system, average scores were calculated for the EU and GCC, the portals were ranked, and the top portals (best performers) were determined for each dimension - Dataset_EU_GCC_OGDportal_Usability_results_clustering;
(4) identification of relationships and patterns among portals based on their performance metrics through cluster analysis. By calculating the average dimensional scores of portals from both types of clusters, their performance across multiple dimensions is evaluated - Dataset_EU_GCC_OGDportal_Usability_results_clustering.
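The weighted scoring, regional averaging, and ranking described in step (3) can be sketched as follows. Note that the dimension names, weights, and scores below are hypothetical placeholders for illustration, not values taken from the study or its weighting system.

```python
# Hypothetical sketch of weighted portal scoring, regional averaging, and ranking.
# Dimension names, weights, and scores are illustrative placeholders only.

DIMENSION_WEIGHTS = {"findability": 0.4, "participation": 0.3, "understandability": 0.3}

def total_score(dimension_scores):
    """Weighted sum of per-dimension scores for one portal."""
    return sum(DIMENSION_WEIGHTS[d] * s for d, s in dimension_scores.items())

portals = {
    "portal_A": {"region": "EU",  "scores": {"findability": 3, "participation": 2, "understandability": 3}},
    "portal_B": {"region": "GCC", "scores": {"findability": 2, "participation": 1, "understandability": 2}},
}

# Total score per portal, then a ranking from best to worst.
totals = {name: total_score(p["scores"]) for name, p in portals.items()}
ranking = sorted(totals, key=totals.get, reverse=True)

# Average total score per region (EU vs GCC), as in step (3).
regions = {}
for name, p in portals.items():
    regions.setdefault(p["region"], []).append(totals[name])
region_avg = {r: sum(v) / len(v) for r, v in regions.items()}
```

The same per-dimension averages, computed separately for each cluster, would support the cluster comparison in step (4).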
For more details see Molodtsov, F., Nikiforova, A. (2024). “An Integrated Usability Framework for Evaluating Open Government Data Portals: Comparative Analysis of EU and GCC Countries”. In Proceedings of the 25th Annual International Conference on Digital Government Research (DGO 2024), June 11–14, 2024, Taipei, Taiwan. https://doi.org/10.1145/3657054.3657159
Format of the file
.xls, .csv
Licenses or restrictions
CC-BY
The Board would use the FR 3076 to seek input from users or potential users of the Board's public website, social media, outreach, and communication responsibilities. The survey would be conducted with a diverse audience of consumers, banks, media, government, educators, and others to gather information about their visit to the Board's public website. Responses to the survey would be used to help improve the usability and offerings on the Board's public website and other online public communications. The frequency of the survey and content of the questions would vary as needs arise for feedback on different resources and from different audiences. The Board anticipates the FR 3076 may be conducted up to 12 times per year, although the survey may not be conducted that frequently. In addition, the Board anticipates conducting up to four focus group sessions per year.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data were collected during the user-centered analysis of the usability of 41 open government data portals, including the EU27, applying a common methodology to them and considering aspects such as specification of open datasets, feedback, and requests, further broken down into 14 sub-criteria. Each aspect, belonging to the acceptability tasks, was assessed using a three-level Likert scale (fulfilled = 3, partially fulfilled = 2, unfulfilled = 1). This dataset summarises a total of 1640 protocols obtained during the analysis of the selected portals carried out by 40 participants, who were selected on a voluntary basis. It is complemented with 4 summaries of these protocols, which include calculated average scores by category, aspect, and country. These data allow comparative analysis of the national open data portals, help to find the key challenges that can negatively impact users’ experience, and identify portals that can serve as examples for less successful open data portals.
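The aggregation of the three-level Likert protocols into average scores by aspect and by country can be sketched as below; the country codes, aspect names, and ratings are invented for illustration and are not taken from the dataset.

```python
# Hypothetical sketch: averaging three-level Likert ratings
# (fulfilled = 3, partially fulfilled = 2, unfulfilled = 1)
# by aspect and by country. All names and values are illustrative.

protocols = [
    {"country": "EE", "aspect": "feedback", "rating": 3},
    {"country": "EE", "aspect": "feedback", "rating": 2},
    {"country": "FR", "aspect": "feedback", "rating": 1},
]

def average_by(key, records):
    """Group protocol records by the given key and average their ratings."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r["rating"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

by_country = average_by("country", protocols)  # e.g. {"EE": 2.5, "FR": 1.0}
by_aspect = average_by("aspect", protocols)    # e.g. {"feedback": 2.0}
```

Averaging by category would follow the same pattern with a `category` field on each record.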
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The dataset contains the data extracted from the literature for the Systematic Literature Review and is referenced or used in the extended abstract titled "How do Non-profit Open data Intermediaries enhance Open data Usability? A Systematic Literature Review", submitted to the 18th International Symposium on Open Collaboration (Companion), September 6–10, 2022, Madrid, Spain. https://doi.org/10.1145/3555051.3555061
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set was acquired using a survey which intends to measure:
• Participants’ previous experience of cybersecurity training
• Participants’ perception of ideal cybersecurity training
• Participants’ perception of a specific cybersecurity training type called Context-Based Micro-Training
• What usability aspects the participants find most important for security features
Data was acquired from Sweden, the UK, and Italy to allow for comparative analysis. Demographic data was collected to allow for further analysis. The files included in this data set are:
• Completesurvey: the full survey presented to the participants.
• Dataset: the variables and data for the different questions (available as .sav (SPSS) and .csv).
• Var_info: information about the variables in the dataset.
• Overview: frequency tables for the survey questions (for the complete data set).
• Sweden, UK, and Italy: frequency tables for the survey questions divided by national sample groups.
See attached description.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data include: demographic data of the participants, including: gender
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Science gateways, offering access to scientific tools and large amounts of data in the form of web portals and other applications, are often run by the research groups who produce these tools and data. As the prevalence of these gateways increases, a key issue that arises is that of usability. The developers of gateways want users to be able to interact with the gateways easily and efficiently. However, design choices are typically made by members of the research groups involved and can be ad hoc, biased, and generally unpredictable. To ensure users can make full use of a gateway, the need arises for a usability study.
The resources needed to do a large-scale usability study are quite significant. Ideal usability testing would involve comparing multiple versions of a user interface to see which version users prefer. In addition to the time and effort needed to produce these versions, many participants need to be found, scripts need to be written, and a large amount of time needs to be dedicated to the testing session itself. While the benefits of large-scale usability testing are obvious, research groups often have neither the user numbers nor the rate of development that would warrant such an effort. Additionally, these groups often do not have the resources needed to fund and staff such an endeavor.
When starting a usability testing initiative at the Materials Project, our approach was to make usability testing as easy as possible while still obtaining valuable information. We focused on a “one morning a month” model and held 3 testing sessions over the course of 3 months, with 10 participants in total. Each session made use of a script to keep participant experience consistent. Scripts were slightly tweaked between sessions to obtain information on the impact of the script itself on user behavior. Each session was streamed live to observers in another room as well as recorded for later review, and observations were noted immediately after each session was over.
With only 10 participants and a minimal budget, we were able to draw conclusions about web design and user behavior specific to our portal. We hope these conclusions, as well as our notes on the testing process itself, prove useful to developers of other science gateways.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset reports on the usability of Open Data Portals as reported by the European public. Knowledge of open science and data concepts is also reported.
The Usability Team of the National Institute of Standards and Technology's (NIST) Public Safety Communications Research (PSCR) program works to identify issues faced by first responders surrounding the use of their existing and emerging public safety communication technology. The team conducted an exploratory, sequential, mixed-methods study to gather insights into first responders' needs for and problems experienced with communication technology. The multi-phase study included in-depth interviews with 193 first responders in Phase 1, followed by a nationwide survey of 7,182 first responders in Phase 2, across four public safety disciplines: Communication Center & 9-1-1 Services (COMMS), Emergency Medical Services (EMS), Fire Service (FF), and Law Enforcement (LE). The data consists of two datasets: (1) Phase 1 data from 193 interviews with first responders from the four disciplines, including direct quotes from interviewees categorized by codes/subcodes, with demographic information included; (2) Phase 2 survey data from 7,182 first responders from the four disciplines, including their responses on what technology they have and use, along with their needs for and problems experienced with communication technology; demographic information is also included.
https://dataintelo.com/privacy-and-policy
The global usability testing tools market size was USD 1.19 Billion in 2023 and is projected to reach USD 7.36 Billion by 2032, expanding at a CAGR of 19.95% during 2024–2032. The market growth is attributed to the growing focus on mobile application development across the globe.
Rising focus on mobile application development is a key driver of the usability testing tools market. Developers increasingly use these tools to verify that mobile applications are intuitive, easy to navigate, and user-friendly. Usability testing tools provide insights into how users interact with an application, helping developers make the adjustments needed to enhance the user experience.
The European Union's General Data Protection Regulation (GDPR) has significant implications for the usability testing tools market. Companies are required to obtain explicit consent from users before collecting personal data, which affects how usability testing tools gather and process data.
Artificial Intelligence has a significant impact on the usability testing tools market. AI's integration into these tools has enhanced their efficiency, accuracy, and speed, changing the way usability testing is conducted. AI-powered usability testing tools are capable of analyzing vast amounts of data in real time, providing valuable insights that help improve the user experience. These tools use machine learning algorithms to identify patterns and trends, which help in predicting user behavior and preferences.
AI has enabled automated usability testing, reducing the need for manual intervention and thereby saving time and resources. This automation eliminates human bias, leading to reliable and objective results. The use of AI in usability testing tools has facilitated personalized user experiences by understanding and adapting to individual user behaviors. The impact of AI on the usability testing tools market is profound, offering significant benefits in terms of efficiency, accuracy, and personalization.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset is the result of a study in which we analyzed parallels and differences between clicks as well as eye movements on two different digital library homepages. For this analysis we used diverse tracking tools for mouse clicks and eye tracking data, which were further studied with respect to specific areas of interest (AOIs).
The dataset contains two screenshots indicating the areas of interest (AOIs; entitled “AreasOfInterest_Kartenportal.jpg” and “AreasOfInterest_Webportal.jpg”), which separate the homepages into analyzable parts. It also contains eight screenshots of the homepages showing the total amount of collected clicks (each name starting with “clicks”) and two screenshots with the eye tracking heat maps (starting with “Heatmap”). The screenshots have been extracted directly from the click and eye tracking tools and matched with the aforementioned AOIs in order to obtain the total count of clicks and views as well as the view duration for the concerned area.
All data are synthesized in a document containing three sheets with different tables: a first one with the initial data compilation for all AOIs of the two analyzed homepages (entitled “Data”), a second one with a more visual compiled data analysis for both homepages and all AOIs (entitled “Data2”), and a last one with the duration of the view as well as the duration of the fixation and the compiled click data (entitled “Eye tracking study data”).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Developers were asked: "When you can find the government data you need, how would you generally describe its accessibility and usability? Please rate the following statements." The statements deal with API access, data quality, documentation and other factors that make government data usable for developers.
Survey Data
Responses to the Survey
Usability Evaluation Data
Dashboard usability evaluation data
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary material for a relevance and usability evaluation in a data portal for biodiversity research.
Data portal: GFBio (https://www.gfbio.org)
Evaluation time: February 2016 (at that time the search index consisted of ~2 million datasets)
Eight domain experts rated the top 25 search results of 16 provided search questions on a 7-point Likert scale from 0 (irrelevant) to 6 (highly relevant). Afterwards, we asked the users to provide and rate up to two queries of their own.
The users also rated 28 statements in a subsequent usability evaluation on a 5-point Likert scale from 'completely disagree' to 'highly agree'. For some statements, only binary ratings were given.
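As a minimal sketch of how per-query relevance ratings on the 0–6 scale might be averaged over the top results, consider the following; the ratings and the cut-off used in the example are invented for illustration and are not drawn from the GFBio evaluation data.

```python
# Hypothetical sketch: mean relevance of the top-k results for one query,
# with ratings on the 0 (irrelevant) to 6 (highly relevant) scale.

def mean_relevance(ratings, k=25):
    """Average the ratings of the first k results for one query."""
    top = ratings[:k]
    return sum(top) / len(top)

# Invented example ratings for one search question, best-ranked first:
ratings = [6, 5, 5, 4, 0, 3]
score = mean_relevance(ratings, k=5)  # averages the first 5 ratings -> 4.0
```

Averaging these per-query means across the 16 search questions would give an overall relevance score per rater or per system configuration.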
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is the dataset containing the results from the user study questionnaire conducted at TU Delft as part of the Master Thesis "Bringing Formal Verification into Widespread Programming Language Ecosystems" by Sára Juhošová. The objective of the research was to investigate and improve the usability of Agda2HS, a tool for transpiling programs written in Agda into Haskell. The data within this dataset was collected through an online form as part of the user study designed to evaluate the usability of Agda2HS after new features were added. The questions were a mixture of Agree-Disagree statements and supplementary open questions, directed at the participants' experience and impressions of Agda2HS. The participants filled in this questionnaire after implementing two programming assignments using Agda2HS.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The complete study material used for the usability study for the development of Data Cart.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Master Thesis for the master's programme Content and Knowledge Engineering at Utrecht University
Research commissioned by SURFfoundation
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Article accepted and published in Archives of Advanced Engineering Science, titled “Assessment of Usability in Health Referral Queue Systems: A Business Process Model and Nielsen Heuristics Analysis”.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional data related to usability evaluation of the ANON system.
The emerging field of environmental DNA (eDNA) research lacks universal guidelines for ensuring that the data produced are FAIR (findable, accessible, interoperable, and reusable), despite growing awareness of the importance of such practices. In order to better understand these data usability challenges, we systematically reviewed 60 peer-reviewed articles conducting a specific subset of eDNA research: metabarcoding studies in marine environments. For each article, we characterized approximately 90 features across several categories: general article attributes and topics, methodological choices, types of metadata included, and availability and storage of sequence data. Analyzing these characteristics, we identified several barriers to data accessibility, including a lack of common context and vocabulary across the articles, missing metadata, supplementary information limitations, and a concentration of both sample collection and analysis in the United States. While some of these barriers require s...