1 dataset found
  1. Transparency in Measurement Reporting: A Systematic Literature Review of CHI PLAY

    • osf.io
    Updated Dec 13, 2024
    Cite
    Lena Aeschbach; Sebastian Perrig; Lorena Weder; Klaus Opwis; Florian Brühlmann (2024). Transparency in Measurement Reporting: A Systematic Literature Review of CHI PLAY [Dataset]. http://doi.org/10.31234/osf.io/69eqn
    Dataset updated
    Dec 13, 2024
    Dataset provided by
    Center for Open Science (https://cos.io/)
    Authors
    Lena Aeschbach; Sebastian Perrig; Lorena Weder; Klaus Opwis; Florian Brühlmann
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Measuring theoretical concepts, so-called constructs, is a central challenge of Player Experience research. Building on recent work in HCI and psychology, we conducted a systematic literature review to study the transparency of measurement reporting. We accessed the ACM Digital Library to analyze all 48 full papers published at CHI PLAY 2020; of those, 24 papers used self-report measurements and were included in the full review. We specifically assessed whether researchers reported What, How, and Why they measured. We found that researchers matched their measures to the construct under study and that administrative details, such as the number of points on a Likert-type scale, were frequently reported. However, definitions of the constructs to be measured and justifications for selecting a particular scale were sparse. A lack of transparency in these areas not only threatens the validity of individual studies but also compromises the building of theories and the accumulation of research knowledge in meta-analytic work. This work is limited to assessing the current transparency of measurement reporting at CHI PLAY 2020; however, we argue this constitutes a fair foundation for assessing potential pitfalls. To address these pitfalls, we propose a prescriptive model of the measurement selection process, which helps researchers systematically define their constructs, specify their operationalizations, and justify why these measures were chosen. Future research employing this model should contribute to more transparent measurement reporting. The research was funded through internal resources. All materials are available at https://osf.io/4xz2v/.


