100+ datasets found
  1. Data and analysis of the avatar surveys

    • ieee-dataport.org
    Updated Jul 9, 2024
    Cite
    Ines Miguel Alonso (2024). Data and analysis of the avatar surveys [Dataset]. https://ieee-dataport.org/documents/data-and-analysis-avatar-surveys
    Dataset updated
    Jul 9, 2024
    Authors
    Ines Miguel Alonso
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and analysis of surveys studying users' opinions about the presence of an avatar during a learning experience in Mixed Reality, along with the demographic data and open-question responses collected. This data was used in the paper "Evaluating the Effectiveness of Avatar-Based Collaboration in XR for Pump Station Training Scenarios" for the GeCon 2024 Conference.

  2. Python Codes for Data Analysis of The Impact of COVID-19 on Technical...

    • dataverse.harvard.edu
    • figshare.com
    Updated Mar 21, 2022
    Cite
    Elizabeth Szkirpan (2022). Python Codes for Data Analysis of The Impact of COVID-19 on Technical Services Units Survey Results [Dataset]. http://doi.org/10.7910/DVN/SXMSDZ
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 21, 2022
    Dataset provided by
    Harvard Dataverse
    Authors
    Elizabeth Szkirpan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Copies of Anaconda 3 Jupyter Notebooks and Python script for holistic and clustered analysis of "The Impact of COVID-19 on Technical Services Units" survey results. Data was analyzed holistically using cleaned and standardized survey results and by library type clusters. To streamline data analysis in certain locations, an off-shoot CSV file was created so data could be standardized without compromising the integrity of the parent clean file. Three Jupyter Notebooks/Python scripts are available in relation to this project: COVID_Impact_TechnicalServices_HolisticAnalysis (a holistic analysis of all survey data) and COVID_Impact_TechnicalServices_LibraryTypeAnalysis (a clustered analysis of impact by library type, clustered files available as part of the Dataverse for this project).
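
    The notebooks themselves are in the Dataverse; purely as an illustration of the holistic-versus-clustered approach described above, here is a minimal pandas sketch (file and column names are hypothetical, not those of the actual notebooks):

        import pandas as pd

        survey = pd.read_csv("covid_techservices_clean.csv")  # hypothetical cleaned parent file

        # Holistic analysis: summarise all responses together.
        print(survey.describe(include="all"))

        # Clustered analysis: repeat the summaries within each library type.
        for library_type, cluster in survey.groupby("library_type"):
            print(library_type, len(cluster))
            print(cluster.describe(include="all"))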

  3. Data Analytics Market Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Dec 31, 2024
    Cite
    Market Research Forecast (2024). Data Analytics Market Report [Dataset]. https://www.marketresearchforecast.com/reports/data-analytics-market-1787
    Explore at:
    doc, ppt, pdf (available download formats)
    Dataset updated
    Dec 31, 2024
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Data Analytics Market size was valued at USD 41.05 billion in 2023 and is projected to reach USD 222.39 billion by 2032, exhibiting a CAGR of 27.3% during the forecast period. Data analytics can be defined as the rigorous process of using tools and techniques within a computational framework to analyze various forms of data for the purpose of organizational decision-making. It is used in almost all fields, such as healthcare, finance, marketing, and transportation, to manage businesses, forecast upcoming events, and improve customer satisfaction. The principal forms of data analytics include descriptive, diagnostic, predictive, and prescriptive analytics. Data gathering, data manipulation, analysis, and data representation are the major subtopics under this area. Data analytics offers many advantages, most prominently better decision-making, higher productivity, cost savings, and the identification of relationships and trends that would otherwise go unnoticed. Recent trends identified in the market include the adoption of AI and ML technologies and their applications, the use of big data, an increased focus on real-time data processing, and concerns for data privacy. These developments are shaping and propelling the advancement and proliferation of data analysis functions and uses. Key drivers for this market are: Rising Demand for Edge Computing Likely to Boost Market Growth. Potential restraints include: Data Security Concerns to Impede the Market Progress. Notable trends are: Metadata-Driven Data Fabric Solutions to Expand Market Growth.

  4. Data Analytics In Financial Market Report | Global Forecast From 2025 To...

    • dataintelo.com
    csv, pdf, pptx
    Updated Oct 16, 2024
    Cite
    Dataintelo (2024). Data Analytics In Financial Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/data-analytics-in-financial-market
    Explore at:
    pptx, csv, pdf (available download formats)
    Dataset updated
    Oct 16, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Analytics in Financial Market Outlook



    The global data analytics in financial market size was valued at approximately USD 10.5 billion in 2023 and is projected to reach around USD 34.8 billion by 2032, growing at a robust CAGR of 14.4% during the forecast period. This remarkable growth is driven by the increasing adoption of advanced analytics technologies, the need for real-time data-driven decision-making, and the rising incidence of financial fraud.



    One of the primary growth factors for the data analytics in the financial market is the burgeoning volume of data generated from diverse sources such as transactions, social media, and online banking. Financial institutions are increasingly leveraging data analytics to process and analyze this vast amount of data to gain actionable insights. Additionally, technological advancements in artificial intelligence (AI) and machine learning (ML) are significantly enhancing the capabilities of data analytics tools, enabling more accurate predictions and efficient risk management.



    Another driving factor is the heightened focus on regulatory compliance and security management. In the wake of stringent regulations imposed by financial authorities globally, organizations are compelled to adopt robust analytics solutions to ensure compliance and mitigate risks. Moreover, with the growing threat of cyber-attacks and financial fraud, there is a heightened demand for sophisticated analytics tools capable of detecting and preventing fraudulent activities in real-time.



    Furthermore, the increasing emphasis on customer-centric strategies in the financial sector is fueling the adoption of data analytics. Financial institutions are utilizing analytics to understand customer behavior, preferences, and needs more accurately. This enables them to offer personalized services, improve customer satisfaction, and drive revenue growth. The integration of advanced analytics in customer management processes helps in enhancing customer engagement and loyalty, which is crucial in the competitive financial landscape.



    Regionally, North America has been the dominant player in the data analytics in financial market, owing to the presence of major market players, technological advancements, and a high adoption rate of analytics solutions. However, the Asia Pacific region is anticipated to witness the highest growth during the forecast period, driven by the rapid digitalization of financial services, increasing investments in analytics technologies, and the growing focus on enhancing customer experience in emerging economies like China and India.



    Component Analysis



    In the data analytics in financial market, the components segment is divided into software and services. The software segment encompasses various analytics tools and platforms designed to process and analyze financial data. This segment holds a significant share in the market owing to the continuous advancements in software capabilities and the growing need for real-time analytics. Financial institutions are increasingly investing in sophisticated software solutions to enhance their data processing and analytical capabilities. The software segment is also being propelled by the integration of AI and ML technologies, which offer enhanced predictive analytics and automation features.



    On the other hand, the services segment includes consulting, implementation, and maintenance services provided by vendors to help financial institutions effectively deploy and manage analytics solutions. With the rising complexity of financial data and analytics tools, the demand for professional services is on the rise. Organizations are seeking expert guidance to seamlessly integrate analytics solutions into their existing systems and optimize their use. The services segment is expected to grow significantly as more institutions recognize the value of professional support in maximizing the benefits of their analytics investments.



    The software segment is further categorized into various types of analytics tools such as descriptive analytics, predictive analytics, and prescriptive analytics. Descriptive analytics tools are used to summarize historical data to identify patterns and trends. Predictive analytics tools leverage historical data to forecast future outcomes, which is crucial for risk management and fraud detection. Prescriptive analytics tools provide actionable recommendations based on predictive analysis, aiding in decision-making processes. The growing need for advanced predictive and prescriptive analytics is driving the demand for specialized software solutions.

  5. Assessing the impact of hints in learning formal specification: Research...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 29, 2024
    Cite
    Margolis, Iara (2024). Assessing the impact of hints in learning formal specification: Research artifact [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10450608
    Dataset updated
    Jan 29, 2024
    Dataset provided by
    Cunha, Alcino
    Sousa, Emanuel
    Margolis, Iara
    Macedo, Nuno
    Campos, José Creissac
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This artifact accompanies the SEET@ICSE article "Assessing the impact of hints in learning formal specification", which reports on a user study investigating the impact of different types of automated hints while learning a formal specification language, both in terms of immediate performance and learning retention, and in terms of the students' emotional response. This research artifact provides all the material required to replicate this study (except for the proprietary questionnaires used to assess the emotional response and user experience), as well as the collected data and the data analysis scripts used for the discussion in the paper.

    Dataset

    The artifact contains the resources described below.

    Experiment resources

    The resources needed for replicating the experiment, namely in directory experiment:

    alloy_sheet_pt.pdf: the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment. The sheet was provided in Portuguese due to the population of the experiment.

    alloy_sheet_en.pdf: a version of the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment, translated into English.

    docker-compose.yml: a Docker Compose configuration file to launch Alloy4Fun populated with the tasks in directory data/experiment for the 2 sessions of the experiment.

    api and meteor: directories with source files for building and launching the Alloy4Fun platform for the study.

    Experiment data

    The task database used in our application of the experiment, namely in directory data/experiment:

    Model.json, Instance.json, and Link.json: JSON files used to populate Alloy4Fun with the tasks for the 2 sessions of the experiment.

    identifiers.txt: the list of all (104) available participant identifiers for the experiment.

    Collected data

    Data collected in the application of the experiment as a simple one-factor randomised experiment in 2 sessions involving 85 undergraduate students majoring in CSE. The experiment was validated by the Ethics Committee for Research in Social and Human Sciences of the Ethics Council of the University of Minho, where the experiment took place. Data is shared in the shape of JSON and CSV files with a header row, namely in directory data/results:

    data_sessions.json: data collected from task-solving in the 2 sessions of the experiment, used to calculate variables productivity (PROD1 and PROD2, between 0 and 12 solved tasks) and efficiency (EFF1 and EFF2, between 0 and 1).

    data_socio.csv: data collected from the socio-demographic questionnaire in the 1st session of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    socio-demographic information: participant's age (AGE), sex (SEX, 1 through 4 for female, male, prefer not to disclose, and other, respectively), and average academic grade (GRADE, from 0 to 20, NA denotes preference not to disclose).

    data_emo.csv: detailed data collected from the emotional questionnaire in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID) and the assigned treatment (column HINT, either N, L, E or D);

    detailed emotional response data: the differential in the 5-point Likert scale for each of the 14 measured emotions in the 2 sessions, ranging from -5 to -1 if decreased, 0 if maintained, from 1 to 5 if increased, or NA denoting failure to submit the questionnaire. Half of the emotions are positive (Admiration1 and Admiration2, Desire1 and Desire2, Hope1 and Hope2, Fascination1 and Fascination2, Joy1 and Joy2, Satisfaction1 and Satisfaction2, and Pride1 and Pride2), and half are negative (Anger1 and Anger2, Boredom1 and Boredom2, Contempt1 and Contempt2, Disgust1 and Disgust2, Fear1 and Fear2, Sadness1 and Sadness2, and Shame1 and Shame2). This detailed data was used to compute the aggregate data in data_emo_aggregate.csv and in the detailed discussion in Section 6 of the paper (a sketch of one possible aggregation follows this list).

    data_umux.csv: data collected from the user experience questionnaires in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    user experience data: summarised user experience data from the UMUX surveys (UMUX1 and UMUX2, as a usability metric ranging from 0 to 100).

    participants.txt: the list of participant identifiers that have registered for the experiment.
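
    A minimal sketch of one plausible way to compute the aggregate emotional response from data_emo.csv in Python, keeping one positive and one negative score per session. The artifact's normalize_emo script defines the actual aggregation; the mean differential used here is an assumption, and only the column names listed above are taken from the dataset.

        import pandas as pd

        POSITIVE = ["Admiration", "Desire", "Hope", "Fascination", "Joy", "Satisfaction", "Pride"]
        NEGATIVE = ["Anger", "Boredom", "Contempt", "Disgust", "Fear", "Sadness", "Shame"]

        emo = pd.read_csv("data_emo.csv")
        for s in ("1", "2"):  # the two sessions
            # Assumed aggregation: mean differential across each valence group.
            emo[f"POS{s}"] = emo[[e + s for e in POSITIVE]].mean(axis=1)
            emo[f"NEG{s}"] = emo[[e + s for e in NEGATIVE]].mean(axis=1)
        emo[["ID", "HINT", "POS1", "NEG1", "POS2", "NEG2"]].to_csv("data_emo_aggregate.csv", index=False)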

    Analysis scripts

    The analysis scripts required to replicate the analysis of the results of the experiment as reported in the paper, namely in directory analysis:

    analysis.r: An R script to analyse the data in the provided CSV files; each performed analysis is documented within the file itself.

    requirements.r: An R script to install the required libraries for the analysis script.

    normalize_task.py: A Python script to normalize the task JSON data from file data_sessions.json into the CSV format required by the analysis script.

    normalize_emo.py: A Python script to compute the aggregate emotional response in the CSV format required by the analysis script from the detailed emotional response data in the CSV format of data_emo.csv.

    Dockerfile: A Docker script to automate running the analysis script on the collected data.

    Setup

    To replicate the experiment and the analysis of the results, only Docker is required.

    If you wish to manually replicate the experiment and collect your own data, you'll need to install:

    A modified version of the Alloy4Fun platform, which is built in the Meteor web framework. This version of Alloy4Fun is publicly available in branch study of its repository at https://github.com/haslab/Alloy4Fun/tree/study.

    If you wish to manually replicate the analysis of the data collected in our experiment, you'll need to install:

    Python to manipulate the JSON data collected in the experiment. Python is freely available for download at https://www.python.org/downloads/, with distributions for most platforms.

    R software for the analysis scripts. R is freely available for download at https://cran.r-project.org/mirrors.html, with binary distributions available for Windows, Linux and Mac.

    Usage

    Experiment replication

    This section describes how to replicate our user study experiment and collect data about how different hints impact the performance of participants.

    To launch the Alloy4Fun platform populated with tasks for each session, just run the following commands from the root directory of the artifact. The Meteor server may take a few minutes to launch; wait for the "Started your app" message to show.

    cd experiment
    docker-compose up

    This will launch Alloy4Fun at http://localhost:3000. The tasks are accessed through permalinks assigned to each participant. The experiment allows for up to 104 participants, and the list of available identifiers is given in file identifiers.txt. The group of each participant is determined by the last character of the identifier, either N, L, E or D. The task database can be consulted in directory data/experiment, in Alloy4Fun JSON files.

    In the 1st session, each participant was given one permalink that gives access to 12 sequential tasks. The permalink is simply the participant's identifier, so participant 0CAN would access http://localhost:3000/0CAN. The next task becomes available after a correct submission to the current task or when a time-out occurs (5 min). Each participant was assigned to a different treatment group, so depending on the permalink different kinds of hints are provided. Below are 4 permalinks, one for each hint group:

    Group N (no hints): http://localhost:3000/0CAN

    Group L (error locations): http://localhost:3000/CA0L

    Group E (counter-example): http://localhost:3000/350E

    Group D (error description): http://localhost:3000/27AD

    In the 2nd session, as in the 1st, each permalink gave access to 12 sequential tasks, and the next task becomes available after a correct submission or a time-out (5 min). The permalink is constructed by prepending the participant's identifier with P-, so participant 0CAN would access http://localhost:3000/P-0CAN. In the 2nd session all participants were expected to solve the tasks without any hints provided, so the permalinks from different groups are undifferentiated.
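
    For illustration only, the permalink scheme described above can be expressed as a small Python helper (not part of the artifact):

        BASE = "http://localhost:3000"

        def session_links(identifier: str) -> tuple[str, str, str]:
            group = identifier[-1]  # last character encodes the hint group: N, L, E or D
            return group, f"{BASE}/{identifier}", f"{BASE}/P-{identifier}"

        # session_links("0CAN") -> ('N', 'http://localhost:3000/0CAN', 'http://localhost:3000/P-0CAN')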

    Before the 1st session the participants should answer the socio-demographic questionnaire, which should ask for the following information: unique identifier, age, sex, familiarity with the Alloy language, and average academic grade.

    Before and after both sessions the participants should answer the standard PrEmo 2 questionnaire. PrEmo 2 is published under an Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons licence (CC BY-NC-ND 4.0). This means that you are free to use the tool for non-commercial purposes as long as you give appropriate credit, provide a link to the license, and do not modify the original material. The original material, namely the depictions of the different emotions, can be downloaded from https://diopd.org/premo/. The questionnaire should ask for the unique user identifier, and for the attachment with each of the 14 depicted emotions, expressed on a 5-point Likert scale.

    After both sessions the participants should also answer the standard UMUX questionnaire. This questionnaire can be used freely, and should ask for the user's unique identifier and answers to the standard 4 questions on a 7-point Likert scale. For information about the questions, how to implement the questionnaire, and how to compute the usability metric ranging from 0 to 100 from the answers, please see the original paper:

    Kraig Finstad. 2010. The usability metric for user experience. Interacting with computers 22, 5 (2010), 323–327.
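
    For reference, a sketch of the usual UMUX scoring from Finstad (2010), as we read it: the positively worded items 1 and 3 score the response minus 1, the negatively worded items 2 and 4 score 7 minus the response, and the sum is rescaled to 0-100.

        def umux_score(q1: int, q2: int, q3: int, q4: int) -> float:
            # Each item is answered on a 7-point Likert scale (1..7).
            odd = (q1 - 1) + (q3 - 1)        # positively worded items
            even = (7 - q2) + (7 - q4)       # negatively worded items, reverse-scored
            return (odd + even) / 24 * 100   # usability metric from 0 to 100

        # umux_score(7, 1, 7, 1) -> 100.0 for the best possible answers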

    Analysis of other applications of the experiment

    This section describes how to replicate the analysis of the data collected in an application of the experiment described in Experiment replication.

    The analysis script expects data in 4 CSV files,

  6. Big Data Analysis Platform Market Report | Global Forecast From 2025 To 2033...

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Big Data Analysis Platform Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-big-data-analysis-platform-market
    Explore at:
    pptx, csv, pdf (available download formats)
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Big Data Analysis Platform Market Outlook



    The global market size for Big Data Analysis Platforms is projected to grow from USD 35.5 billion in 2023 to an impressive USD 110.7 billion by 2032, reflecting a CAGR of 13.5%. This substantial growth can be attributed to the increasing adoption of data-driven decision-making processes across various industries, the rapid proliferation of IoT devices, and the ever-growing volumes of data generated globally.
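
    As a quick sanity check, the stated rate follows from the endpoints via the standard CAGR formula, CAGR = (end/start)**(1/years) - 1, with nine years between 2023 and 2032:

        start, end, years = 35.5, 110.7, 9       # USD billions, 2023 -> 2032
        cagr = (end / start) ** (1 / years) - 1
        print(f"{cagr:.1%}")                     # 13.5%, matching the figure above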



    One of the primary growth factors for the Big Data Analysis Platform market is the escalating need for businesses to derive actionable insights from complex and voluminous datasets. With the advent of technologies such as artificial intelligence and machine learning, organizations are increasingly leveraging big data analytics to enhance their operational efficiency, customer experience, and competitiveness. The ability to process vast amounts of data quickly and accurately is proving to be a game-changer, enabling businesses to make more informed decisions, predict market trends, and optimize their supply chains.



    Another significant driver is the rise of digital transformation initiatives across various sectors. Companies are increasingly adopting digital technologies to improve their business processes and meet changing customer expectations. Big Data Analysis Platforms are central to these initiatives, providing the necessary tools to analyze and interpret data from diverse sources, including social media, customer transactions, and sensor data. This trend is particularly pronounced in sectors such as retail, healthcare, and BFSI (banking, financial services, and insurance), where data analytics is crucial for personalizing customer experiences, managing risks, and improving operational efficiencies.



    Moreover, the growing adoption of cloud computing is significantly influencing the market. Cloud-based Big Data Analysis Platforms offer several advantages over traditional on-premises solutions, including scalability, flexibility, and cost-effectiveness. Businesses of all sizes are increasingly turning to cloud-based analytics solutions to handle their data processing needs. The ability to scale up or down based on demand, coupled with reduced infrastructure costs, makes cloud-based solutions particularly appealing to small and medium-sized enterprises (SMEs) that may not have the resources to invest in extensive on-premises infrastructure.



    Data Science and Machine-Learning Platforms play a pivotal role in the evolution of Big Data Analysis Platforms. These platforms provide the necessary tools and frameworks for processing and analyzing vast datasets, enabling organizations to uncover hidden patterns and insights. By integrating data science techniques with machine learning algorithms, businesses can automate the analysis process, leading to more accurate predictions and efficient decision-making. This integration is particularly beneficial in sectors such as finance and healthcare, where the ability to quickly analyze complex data can lead to significant competitive advantages. As the demand for data-driven insights continues to grow, the role of data science and machine-learning platforms in enhancing big data analytics capabilities is becoming increasingly critical.



    From a regional perspective, North America currently holds the largest market share, driven by the presence of major technology companies, high adoption rates of advanced technologies, and substantial investments in data analytics infrastructure. Europe and the Asia Pacific regions are also experiencing significant growth, fueled by increasing digitalization efforts and the rising importance of data analytics in business strategy. The Asia Pacific region, in particular, is expected to witness the highest CAGR during the forecast period, propelled by rapid economic growth, a burgeoning middle class, and increasing internet and smartphone penetration.



    Component Analysis



    The Big Data Analysis Platform market can be broadly categorized into three components: Software, Hardware, and Services. The software segment includes analytics software, data management software, and visualization tools, which are crucial for analyzing and interpreting large datasets. This segment is expected to dominate the market due to the continuous advancements in analytics software and the increasing need for sophisticated data analysis tools. Analytics software enables organizations to process and analyze data from multiple sources,

  7. Data from: Environmental impact assessment for large carnivores: a...

    • data.niaid.nih.gov
    • search.dataone.org
    • +2 more
    zip
    Updated Apr 19, 2024
    Cite
    Gonçalo Ferrão da Costa; Miguel Mascarenhas; Carlos Fonseca; Chris Sutherland (2024). Environmental impact assessment for large carnivores: a methodological review of the wolf (Canis lupus) monitoring in Portugal [Dataset]. http://doi.org/10.5061/dryad.t1g1jwt87
    Explore at:
    zip (available download formats)
    Dataset updated
    Apr 19, 2024
    Dataset provided by
    University of St Andrews
    BE Bioinsight & Ecoa
    University of Aveiro
    Authors
    Gonçalo Ferrão da Costa; Miguel Mascarenhas; Carlos Fonseca; Chris Sutherland
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Portugal
    Description

    The continuous growth of the global human population results in increased use and change of landscapes, with infrastructures like transportation or energy facilities being a particular risk to large carnivores. Environmental Impact Assessments were established to identify the probable environmental consequences of any new proposed project, find ways to reduce impacts, and provide evidence to inform decision making and mitigation. Portugal has a wolf population of around 300 individuals, designated as an endangered species with full legal protection. They occupy the northern mountainous areas of the country, which have also been the focus of new human infrastructures over the last 20 years. Consequently, dozens of wolf monitoring programs have been established to evaluate wolf population status, to identify impacts, and to inform appropriate mitigation or compensation measures. We reviewed Portuguese wolf monitoring programs to answer four key questions: do wolf programs examine adequate biological parameters to meet monitoring objectives? Is the study design suitable for measuring impacts? Are data collection methods and effort sufficient for the stated inference objectives? And do statistical analyses of the data lead to robust conclusions? Overall, we found a mismatch between the stated aims of wolf monitoring and the results reported, and often neither aligns with the existing national wolf monitoring guidelines. Despite the vast effort expended and the diversity of methods used, data analysis makes almost exclusive use of relative indices or summary statistics, with little consideration of the potential biases that arise through the (imperfect) observational process. This makes comparisons of impacts across space and time difficult and is therefore unlikely to contribute to a general understanding of wolf responses to infrastructure-related disturbance. We recommend the development of standardized monitoring protocols and advocate for the use of statistical methods that account for imperfect detection to guarantee accuracy, reproducibility, and efficacy of the programs.

    Methods

    We reviewed all major wolf monitoring programs developed for environmental impact assessments in Portugal since 2002 (Table S1, Supplementary material). Given that the focus here is on the adequacy of targeted wolf monitoring for delivering conclusions about the effects of infrastructure development, we reviewed only monitoring programs that were specifically designed for wolves, not those concerned with general mammalian assessment. The starting point was a compilation from the 2019-2021 National Wolf Census (Pimenta et al., 2023), in which every wolf monitoring program that occurred between 2014 and 2019 in Portugal was identified. The list was completed with projects that started before 2014 or after 2019, based on personal knowledge and inquiries to principal scientific teams, governmental agencies, and EIA consultants. Depending on duration, wolf monitoring programs can produce several, usually annual, reports that are not peer-reviewed and do not appear on standard search engines (e.g., Web of Science or Google Scholar) but are publicly available from the Portuguese Environmental Agency (APA - www.apambiente.pt). We conducted an online search on APA's search engine (https://siaia.apambiente.pt/) and identified a total of 30 projects. For each of these projects, we were interested in the first and the last report, to identify any methodological changes. If the last report was not present, we reviewed the most recent one. If no report was present, we requested it from the team responsible. Our investigation centred on characterizing and quantifying four components of wolf monitoring programs that are interlinked and that should ideally be determined by the initial objectives: (1) biological parameters, i.e., what wolf parameters were studied to assess impacts; (2) study design, i.e., what sampling schemes were followed to collect and analyse data; (3) data collection, i.e., which sampling methodology and how much effort was used to collect data; and (4) data analysis, i.e., how data were analysed to estimate relevant parameters and assess impact. Biological parameters were identified and classified under two categories: occurrence and demography, which broadly correspond to the necessary inputs to assess impacts like exclusion effects and changes in reproductive patterns. Occurrence-related parameters refer to variables used to measure the presence or absence of wolves, whereas demographic parameters refer to variables that intend to measure population-level effects such as abundance, density, survival, or reproduction. We also recorded whether any effort was made to quantify prey population distribution or abundance, as recommended in the guidelines. For study design, we reviewed the sampling design of the project, with specific focus on the spatial and temporal aspects of the study, such as total area surveyed, the definition of a sampling site within this region (i.e., resolution), the duration of the study, and the number of sampling seasons. The goal here was to determine whether the sampling scheme used was appropriate for assessing infrastructure impacts on wolf distribution or demography, depending on the focus. For data collection, we identified the main data collection methodologies used and the corresponding sampling effort. By far the most frequent method used is sign surveys, specifically scat surveys, and for these studies we recorded whether genetic identification of species or individuals based on faecal DNA was attempted. We compared how sampling effort varies by the various inference objectives and, as above, assessed which, if any, project or data collection approach is most likely to produce evidence of impact. We divided the analysis component into two groups: single-year and multi-year analyses. For single-year analyses we identified how monitoring projects used data to make inferences about the state of the biological parameters of interest and discuss the associated strengths and weaknesses. For multi-year analyses, we recorded how differences or trends were quantified and associated with infrastructure impacts, commenting on the statistical robustness of the analyses used across the projects.

  8. SAS code used to analyze data and a datafile with metadata glossary |...

    • gimi9.com
    Updated Dec 28, 2016
    + more versions
    Cite
    (2016). SAS code used to analyze data and a datafile with metadata glossary | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_sas-code-used-to-analyze-data-and-a-datafile-with-metadata-glossary
    Dataset updated
    Dec 28, 2016
    Description

    We compiled macroinvertebrate assemblage data collected from 1995 to 2014 from the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to define depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) to provide a tool useful for assessing progress toward achieving removal targets for the degraded benthos beneficial use impairment in the AOC. The relationship between depth and benthos metrics was wedge-shaped. We therefore used quantile regression to model the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, combined percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly nymphs (Hexagenia). We created a scaled trimetric index from the first three metrics. Metric values at or above the 90th percentile quantile regression model prediction were defined as reference condition for that depth. We set the cutoff between poor and fair condition as the 50th percentile model prediction. We examined sampler type, exposure, geographic zone of the AOC, and substrate type for confounding effects. Based on these analyses we combined data across sampler type and exposure classes and created separate models for each geographic zone. We used the resulting condition class cutoff values to assess the relative benthic condition for three habitat restoration project areas. The depth-limited pattern of ephemerid abundance we observed in the St. Louis River AOC also occurred elsewhere in the Great Lakes. We provide tabulated model predictions for application of our depth-adjusted condition class cutoff values to new sample data. This dataset is associated with the following publication: Angradi, T., W. Bartsch, A. Trebitz, V. Brady, and J. Launspach. A depth-adjusted ambient distribution approach for setting numeric removal targets for a Great Lakes Area of Concern beneficial use impairment: Degraded benthos. JOURNAL OF GREAT LAKES RESEARCH. International Association for Great Lakes Research, Ann Arbor, MI, USA, 43(1): 108-120, (2017).
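
    The paper's analysis was done in SAS (that code is in this dataset); purely as an illustration of the cutoff idea, here is a minimal quantile-regression sketch in Python with statsmodels, using hypothetical file and column names:

        import pandas as pd
        import statsmodels.formula.api as smf

        benthos = pd.read_csv("benthos_metrics.csv")   # hypothetical metric-by-depth table
        model = smf.quantreg("taxa_richness ~ depth", benthos)
        ref_fit = model.fit(q=0.90)    # 90th percentile prediction = reference condition
        fair_fit = model.fit(q=0.50)   # 50th percentile prediction = poor/fair cutoff

        depths = pd.DataFrame({"depth": [2.0, 5.0, 10.0]})
        print(ref_fit.predict(depths))     # depth-adjusted reference cutoffs
        print(fair_fit.predict(depths))    # depth-adjusted poor/fair cutoffs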

  9. Data used in the paper_ Analyzing innovation in museums through qualitative...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 21, 2022
    Cite
    de-Miguel-Molina, Maria (2022). Data used in the paper_ Analyzing innovation in museums through qualitative comparative analysis [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3456878
    Dataset updated
    Jul 21, 2022
    Dataset provided by
    de-Miguel-Molina, Blanca
    de-Miguel-Molina, Maria
    Boix Domenech, Rafael
    License

    Attribution-NonCommercial 2.5 (CC BY-NC 2.5): https://creativecommons.org/licenses/by-nc/2.5/
    License information was derived automatically

    Description

    Data used in the three models of the paper published in Knowledge Management Research & Practice (https://www.tandfonline.com/doi/abs/10.1080/14778238.2019.1601505?journalCode=tkmr20).

    Please cite as follows:

    Blanca de Miguel Molina, Rafael Boix Domenech & María de Miguel Molina (2019) Analysing innovation in museums through qualitative comparative analysis, Knowledge Management Research & Practice, 17:2, 213-226, DOI: 10.1080/14778238.2019.1601505

  10. Analysis of the accuracy of the inverse marching method used to determine...

    • pk.rodbuk.pl
    txt, zip
    Updated Mar 25, 2025
    Cite
    Magdalena Jaremkiewicz (2025). Analysis of the accuracy of the inverse marching method used to determine thermal stresses in cylindrical pressure components - research data [Dataset]. http://doi.org/10.58099/PK/ME3D8A
    Explore at:
    txt(2879), zip(2608440) (available download formats)
    Dataset updated
    Mar 25, 2025
    Dataset provided by
    Cracow University of Technology
    Authors
    Magdalena Jaremkiewicz
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The collection contains data to analyse the accuracy of the inverse marching method used to determine thermal stresses in cylindrical pressure components.

  11. Dataset analysing the crossover between archivists, recordkeeping...

    • figshare.com
    xlsx
    Updated Aug 29, 2018
    Cite
    Rebecca Grant (2018). Dataset analysing the crossover between archivists, recordkeeping professionals and research data management using email list data [Dataset]. http://doi.org/10.6084/m9.figshare.7007903.v1
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Aug 29, 2018
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Rebecca Grant
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset relates to research on the connections between archives professionals and research data management. It consists of a single Excel spreadsheet with four sheets, containing an analysis of emails sent to two email discussion lists: Archives-NRA (Archivists, conservators and records managers) and Research-Dataman. The coded dataset and a list of codes used for each mailing list are provided. The two datasets were downloaded from the JiscMail Email Discussion list archives on 27 July 2018. The Archives-NRA dataset was compiled by conducting a free-text search for "research data" on the mailing list's archives; the metadata for every search result was downloaded and coded (144 metadata records in total). The resulting coded dataset demonstrates how frequently archivists and records professionals discuss research data on the Archives-NRA list, the topics which are discussed, and an increase in these discussions over time. The Research-Dataman dataset was compiled by conducting a free-text search for "archivist" on the mailing list's archives; the metadata for every search result was downloaded and coded (197 emails in total). The resulting coded dataset demonstrates how frequently data management professionals seek the advice of archivists or advertise vacancies for archivists, and how often archivists email this mailing list. The names and email addresses of the mailing list participants have been redacted for privacy reasons, but the original full-text emails can be accessed by members of the respective mailing lists using the URLs provided in the dataset.
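
    As an illustrative sketch of the over-time tally described above (the sheet and column names are assumptions, not necessarily those used in the spreadsheet):

        import pandas as pd

        # Coded metadata records from the Archives-NRA search for "research data".
        emails = pd.read_excel("email_list_analysis.xlsx", sheet_name="Archives-NRA")
        emails["year"] = pd.to_datetime(emails["date"]).dt.year
        print(emails.groupby(["year", "code"]).size())  # topic counts per year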

  12. Just Dance @ YouTube: Multi-label Text + Analytics

    • kaggle.com
    Updated Jan 24, 2022
    Cite
    Renato Santos (2022). Just Dance @ YouTube: Multi-label Text + Analytics [Dataset]. https://www.kaggle.com/datasets/renatojmsantos/just-dance-on-youtube/data
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 24, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Renato Santos
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    YouTube
    Description

    Context

    With the growth of social media and the spread of the Internet, users' opinions became accessible in public forums. It then became possible to analyse and extract knowledge from the textual data published by users, through the application of Natural Language Processing and Text Mining techniques. In this dissertation, these techniques are used to extract information about Usability, User Experience (UX), and Perceived Health Impacts related to Quality of Life (H-QoL) from comments posted by users on YouTube. The analysis focuses on videos about the Just Dance series, one of the most popular interactive dance video games.

    Just Dance belongs to a category of games whose purpose goes beyond entertainment, serious games, and within these to a specific type, exergames, whose aim is to promote physical activity. Despite the positive influence of these games on users' health, players often stop playing after a short period of time, losing the benefits in the medium and long term. It is in this context that the need arises to better understand the experience and opinions of players, especially how they feel and how they like to interact, so that the resulting knowledge can be used to redesign games to better address the preferences of end-users.

    With this purpose, a serious game must assure not only the fundamental characteristics of a functioning system, but also provide the best possible experience and, at the same time, make it possible to understand whether the game positively impacts players' lives. Accordingly, this work analyses three dimensions in the extracted corpus: besides Usability and UX aspects, also H-QoL.

    To meet the objectives, a tool was developed that extracts information from user comments on YouTube, a social media network that, despite being one of the most popular, has still been little explored as a source for opinion mining. To extract information about Usability, UX and H-QoL, a pre-established vocabulary was used, with an approach based on the lexicon of the English language and its semantic relations. In this way, the presence of 38 concepts (five of Usability, 18 of UX, and 15 of H-QoL) was annotated, and the sentiment of each comment was also analysed. Given the lack of a vocabulary that allowed for the analysis of the dimension related to H-QoL, the concepts identified in the World Health Organization's WHOQOL-100 questionnaire were validated for user opinion mining purposes with ten specialists in the Health and Quality of Life domains.
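
    As a toy sketch of lexicon-based concept annotation, flagging which concepts a comment touches; the concept names and term lists below are illustrative only, not the validated vocabulary used in the dissertation:

        LEXICON = {
            "Usability / Ease of use": {"easy", "intuitive", "simple"},
            "UX / Enjoyment": {"fun", "enjoy", "love"},
            "H-QoL / Physical activity": {"workout", "exercise", "sweat"},
        }

        def annotate(comment: str) -> list[str]:
            words = set(comment.lower().split())
            return [concept for concept, terms in LEXICON.items() if words & terms]

        # annotate("Such a fun workout and easy to learn") matches all three concepts.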

    The results of the information extraction are displayed in a public dashboard that allows visitors to explore and analyse the existing data. At the time of this work, 543,405 comments had been collected from 32,158 videos, of which about 52% contain information related to the three dimensions. The performance of this annotation process, as measured through human validation with eight collaborators, reached an overall efficacy of 85%.

    Content

    There are three datasets related to the Just Dance game on YouTube:

    - All the user comments extracted, with some information about them and with sentiment analysis
    - Analytics collected from YouTube, related to comments, videos and channels
    - All the data analyzed in the work, with the annotation of the 38 concepts under study

    Project

    Developed by Renato Santos in the context of the Master's Degree in Informatics Engineering, DEI-FCTUC, dissertation titled "Analysing Usability, User Experience, and Perceived Health Impacts related to Quality of Life based on Users' Opinion Mining", under the supervision of Paula Alexandra Silva and Joel Perdiz Arrais.

    More information

    Check more about this project: https://linktr.ee/justdanceproject

    Contact

    If you have any questions or suggestions, please e-mail us at renatojms@student.dei.uc.pt

  13. Analyzing Data on Behavioral and Immunological Effects of Inflammation in...

    • qubeshub.org
    Updated Aug 20, 2024
    Cite
    Cynthia Downs (2024). Analyzing Data on Behavioral and Immunological Effects of Inflammation in Mice [Dataset]. http://doi.org/10.25334/38E6-SQ03
    Dataset updated
    Aug 20, 2024
    Dataset provided by
    QUBES
    Authors
    Cynthia Downs
    Description

    In this activity, students will learn about behavioral and immunological responses to a bacterial infection. The activity emphasizes the host's responses rather than bacterial manipulation of the host. In the activity, students will apply core concepts and competencies from Vision & Change (https://visionandchange.org/). The activity uses a hands-on data analysis approach for completing an assignment, either in class or as homework. At the end of this activity, students write up a lab report and/or answer questions related to the activity. The "in class" part of the study was designed to be completed in a 3-hour period.

  14. Meteorological and evaporation data used in a water-budget analysis of the...

    • data.usgs.gov
    • catalog.data.gov
    Updated Nov 18, 2024
    + more versions
    Cite
    Richard Slattery; Nam (Namjeong) (2024). Meteorological and evaporation data used in a water-budget analysis of the Medina and Diversion Lake system [Dataset]. http://doi.org/10.5066/P14N5SNK
    Dataset updated
    Nov 18, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Richard Slattery; Nam (Namjeong)
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Mar 3, 2017 - Oct 13, 2022
    Area covered
    Diversion Lake
    Description

    This data set contains meteorological and evaporation data collected at the Medina Lake meteorological station (U.S. Geological Survey station 293355098560601) during March 2017–October 2022. These data include 30-minute averages of air pressure, air temperature, relative humidity, vapor pressure, wind speed and direction, and evaporation. The daily total evaporation measured at the U.S. Geological Survey meteorological station was compared with the daily total evaporation from the Texas Water Development Board to obtain a corrected daily total of evaporation. The corrected daily total evaporation was used as one of the measurable terms in the water-budget analysis of the Medina and Diversion Lake system.
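
    A hedged sketch of the daily-total step in Python, assuming the 30-minute records can be read as a time-indexed table; the correction against the Texas Water Development Board series is shown as a simple mean ratio, since the description does not specify the exact formula:

        import pandas as pd

        met = pd.read_csv("medina_met.csv", parse_dates=["datetime"]).set_index("datetime")
        daily_usgs = met["evaporation"].resample("D").sum()   # daily totals from 30-min data

        twdb = pd.read_csv("twdb_daily.csv", parse_dates=["date"]).set_index("date")["evaporation"]
        factor = (twdb / daily_usgs).mean()     # assumed form of the correction
        daily_corrected = daily_usgs * factor   # term used in the water budget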

  15. Data from: National citation patterns of NEJM, The Lancet, JAMA and The BMJ...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Sep 25, 2017
    Cite
    Gonzalo Casino; Roser Rius; Erik Cobo (2017). National citation patterns of NEJM, The Lancet, JAMA and The BMJ in the lay press: a quantitative content analysis [Dataset]. http://doi.org/10.5061/dryad.bh576
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 25, 2017
    Dataset provided by
    Department of Communications and the Arts
    Department of Statistics and Operations Research
    Authors
    Gonzalo Casino; Roser Rius; Erik Cobo
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Objectives: To analyse the total number of newspaper articles citing the four leading general medical journals and to describe national citation patterns.

    Design: Quantitative content analysis.

    Setting/sample: Full text of 22 general newspapers in 14 countries over the period 2008-2015, collected from LexisNexis. The 14 countries have been categorized into four regions: US, UK, Western World (EU countries other than the UK, plus Australia, New Zealand and Canada) and Rest of the World (other countries).

    Main outcome measure: Press citations of four medical journals (two American: NEJM and JAMA; and two British: The Lancet and The BMJ) in 22 newspapers.

    Results: British and American newspapers cited some of the four analysed medical journals about three times a week in 2008-2015 (weekly means 3.2 and 2.7 citations respectively); the newspapers from other Western countries did so about once a week (weekly mean 1.1), and those from the Rest of the World cited them about once a month (monthly mean 1.1). The New York Times cited above all other newspapers (weekly mean 4.7). The analysis showed the existence of three national citation patterns in the daily press: American newspapers cited mostly American journals (70.0% of citations), British newspapers cited mostly British journals (86.5%), and the rest of the analysed press cited more British journals than American ones. The Lancet was the most cited journal in the press of almost all Western countries outside the US and the UK. Multivariate correspondence analysis confirmed the national patterns and showed that over 85% of the citation data variability is retained in just one single new variable: the national dimension.

    Conclusion: British and American newspapers are the ones that cite the four analysed medical journals most often, showing a domestic preference for their respective national journals; non-British and non-American newspapers show a common international citation pattern.
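
    For illustration, the share of inertia captured by the first dimension of a correspondence analysis can be computed from the singular values of the standardized residual matrix; the counts below are made up:

        import numpy as np

        # Rows: newspaper groups; columns: NEJM, JAMA, The Lancet, The BMJ (made-up counts).
        counts = np.array([[70.0, 60.0, 15.0, 5.0],
                           [10.0, 5.0, 45.0, 40.0],
                           [20.0, 15.0, 40.0, 25.0]])
        P = counts / counts.sum()
        r, c = P.sum(axis=1), P.sum(axis=0)
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
        inertia = np.linalg.svd(S, compute_uv=False) ** 2
        print(inertia[0] / inertia.sum())  # share of variability on dimension 1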

  16. Health research evidence-analysed data.-analyzed

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Jul 23, 2024
    + more versions
    Cite
    Mageda, Kihulya; Kalolo, Albino; Mongi, Richard John; Kagoma, Pius (2024). Health research evidence-analysed data.-analyzed [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001354605
    Dataset updated
    Jul 23, 2024
    Authors
    Mageda, Kihulya; Kalolo, Albino; Mongi, Richard John; Kagoma, Pius
    Description

    This research intended to analyze the current usage of health research evidence in health planning, its determinants, and readiness to use knowledge translation tools among planning teams in Tanzania. Specifically, the study aims to 1) analyze the current usage of health research evidence among planning team members at the regional and council levels, 2) analyze the capability for the use of health research evidence among planning team members at regional and council levels, 3) analyze the opportunities for the use of health research evidence among health planning members at regional and council levels, 4) identify the motivations for the use of health research evidence among health planning team members at regional and council levels, and 5) assess the readiness of the planning team members to use knowledge translation tools. The study employed an exploratory mixed-method study design. It was conducted in nine (9) regions and eighteen (18) councils of Tanzania Mainland, involving the health planning team members.

  17. Road tractors by province, fuel, type and Euro category - ACI data and...

    • gimi9.com
    Updated Dec 18, 2024
    + more versions
    Cite
    (2024). Road tractors by province, fuel, type and category Euro - ACI data and statistics | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_trattoristradalidistintiperprovinciaalimentazionetipologiaenormativaeuro_19422_6/
    Dataset updated
    Dec 18, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Statistical analysis of the vehicle fleet for road tractors by province, fuel, type and Euro category. Vehicle fleet statistics are prepared from data in open format made available on the portal by the Automobile Club of Italy (ACI - https://www.aci.it/laci/studies-and-research/data-e-statistics/self-portrait.html). The data were extracted for the Piedmont Region only and completed with the provincial ISTAT codes, for easy reading and possible comparison with other datasets. The data, taken from the ACI's archives and from the analysis of the vehicle fleet, in this case in Piedmont, may be of interest to the worlds of economy and environment, as well as for land management and social surveys. ACI publishes the data by October of each year, with figures relating to 31 December of the previous year.

  18. Secondary Data Analysis of the Socio-Economic Panel Study and the...

    • b2find.eudat.eu
    Updated Nov 24, 2023
    + more versions
    Cite
    (2023). Secondary Data Analysis of the Socio-Economic Panel Study and the Cross-National Equivalent File, 2016-2020 - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/38c2bdd7-3175-5229-afaf-aef6dd11bd06
    Dataset updated
    Nov 24, 2023
    Description

    The data comprises three of the Cross-National Equivalent Files: the Panel Study of Income Dynamics (1970-2013); the German Socio-Economic Panel Study (1984-2015); and the UKHLS (2009-2014) together with the British Household Panel Study (1991-2009). The following variables were extracted: personal identifier (x11101LL), household identifier (x11102), survey year (year), sex (d11101LL), marital status (d11104), income (i11110), employment status (e11101), hours worked (e11101), education (d11108/9), partner identifier (d11105), household size (d11106) and number of children (d11107). The data came in a harmonized form from the data providers. For the papers on Germany, in addition to the variables described above, life satisfaction, work hour flexibility, caregiving, housework hours, widowhood status and carer ID were further extracted from the original German Socio-Economic Panel Study. Longitudinal research has mainly focussed on women's problems in maintaining a career. However, mothers in a relationship are the ones who struggle, because fathers often rely on their unpaid work efforts to maintain a career (Blossfeld and Drobnic, 2001; Gershuny, 2000). This suggests that couple-level shifts in the division of labour upon entering parenthood are at the heart of the problem of gender inequalities in life course career investment. Women who enter parenthood also make up the majority of the UK female population (Office for National Statistics, 2012). Hence, this project's main goal is to understand how couples' careers are interrelated across their lives. The second goal is to analyse which factors are associated with dual career failure and dual career success. No primary data was collected; see related resources for the data used.
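
    As a sketch of the extraction step, assuming the harmonized CNEF files can be read as one CSV keyed by the mnemonics listed above (the actual files are distributed in other formats):

        import pandas as pd

        KEEP = ["x11101LL", "x11102", "year", "d11101LL", "d11104", "i11110",
                "e11101", "d11108", "d11109", "d11105", "d11106", "d11107"]
        panel = pd.read_csv("cnef_harmonized.csv", usecols=KEEP)
        print(panel.head())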

  19. Research Data for Analyzing the Business Model of Free-to-Play Games on PC...

    • scholar-3h97c.lolm.eu.org
    csv, json
    Updated Jul 18, 2025
    Cite
    Dr. Amanda Evans (2025). Research Data for Analyzing the Business Model of Free-to-Play Games on PC and Consoles [Dataset]. http://doi.org/10.1069/83v2h17083034-data
    Explore at:
    csv, json (available download formats)
    Dataset updated
    Jul 18, 2025
    Authors
    Dr. Amanda Evans
    Variables measured
    Variable A, Variable B, Variable C, Correlation Index, Statistical Significance
    Description

    Complete dataset used in the research study on Analyzing the Business Model of Free-to-Play Games on PC and Consoles by Dr. Amanda Evans

  20. Replication Data for: Mind the Context! The Role of Theoretical Concepts for...

    • dataverse.harvard.edu
    Updated Aug 7, 2024
    Cite
    Jan Schwalbach (2024). Replication Data for: Mind the Context! The Role of Theoretical Concepts for Analyzing Legislative Text Data [Dataset]. http://doi.org/10.7910/DVN/BGYW08
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 7, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Jan Schwalbach
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    With the progressing advances in text analysis methods and the increasing accessibility of parliamentary documents, the range of tools available to legislative scholars has increased massively over the past years. While the potential for comparative studies is huge, researchers can easily overlook the pitfalls associated with analyzing these documents. Against this background, I assess which theoretical considerations need to be carefully thought through before any (legislative) text analysis: I show that the definition and conceptualization of the unit of analysis, such as a speech or a bill, can vary substantially depending on the research interest. Furthermore, I discuss how the nested structure of legislative behavior and the data generating process influence our theoretical assumptions about parliamentary behavior. Based on these concepts, I derive some recommendations for the theoretical approach to quantitative textual analyses of parliamentary documents.
