6 datasets found
  1. Assessing the impact of hints in learning formal specification: Research artifact

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    Cite
    Macedo, Nuno; Cunha, Alcino; Campos, José Creissac; Sousa, Emanuel; Margolis, Iara (2024). Assessing the impact of hints in learning formal specification: Research artifact [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10450608
    Dataset updated
    Jan 29, 2024
    Dataset provided by
    Centro de Computação Gráfica
    INESC TEC
    Authors
    Macedo, Nuno; Cunha, Alcino; Campos, José Creissac; Sousa, Emanuel; Margolis, Iara
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This artifact accompanies the SEET@ICSE article "Assessing the impact of hints in learning formal specification", which reports on a user study investigating the impact of different types of automated hints while learning a formal specification language, both in terms of immediate performance and learning retention, and in terms of the emotional response of the students. This research artifact provides all the material required to replicate this study (except for the proprietary questionnaires passed to assess the emotional response and user experience), as well as the collected data and the data analysis scripts used for the discussion in the paper.

    Dataset

    The artifact contains the resources described below.

    Experiment resources

    The resources needed for replicating the experiment, namely in directory experiment:

    alloy_sheet_pt.pdf: the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment. The sheet was provided in Portuguese due to the population of the experiment.

    alloy_sheet_en.pdf: an English translation of the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment.

    docker-compose.yml: a Docker Compose configuration file to launch Alloy4Fun populated with the tasks in directory data/experiment for the 2 sessions of the experiment.

    api and meteor: directories with source files for building and launching the Alloy4Fun platform for the study.

    Experiment data

    The task database used in our application of the experiment, namely in directory data/experiment:

    Model.json, Instance.json, and Link.json: JSON files used to populate Alloy4Fun with the tasks for the 2 sessions of the experiment.

    identifiers.txt: the list of all 104 available identifiers for participants in the experiment.

    Collected data

    Data collected in the application of the experiment as a simple one-factor randomised experiment in 2 sessions involving 85 undergraduate students majoring in CSE. The experiment was approved by the Ethics Committee for Research in Social and Human Sciences of the Ethics Council of the University of Minho, where the experiment took place. Data is shared as JSON and CSV files with a header row, namely in directory data/results:

    data_sessions.json: data collected from task-solving in the 2 sessions of the experiment, used to calculate variables productivity (PROD1 and PROD2, between 0 and 12 solved tasks) and efficiency (EFF1 and EFF2, between 0 and 1).

    data_socio.csv: data collected from the socio-demographic questionnaire in the 1st session of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    socio-demographic information: participant's age (AGE), sex (SEX, 1 through 4 for female, male, prefer not to disclose, and other, respectively), and average academic grade (GRADE, from 0 to 20; NA denotes a preference not to disclose).

    data_emo.csv: detailed data collected from the emotional questionnaire in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID) and the assigned treatment (column HINT, either N, L, E or D);

    detailed emotional response data: the differential in the 5-point Likert scale for each of the 14 measured emotions in the 2 sessions, ranging from -5 to -1 if decreased, 0 if maintained, 1 to 5 if increased, or NA denoting failure to submit the questionnaire. Half of the emotions are positive (Admiration1 and Admiration2, Desire1 and Desire2, Hope1 and Hope2, Fascination1 and Fascination2, Joy1 and Joy2, Satisfaction1 and Satisfaction2, and Pride1 and Pride2), and half are negative (Anger1 and Anger2, Boredom1 and Boredom2, Contempt1 and Contempt2, Disgust1 and Disgust2, Fear1 and Fear2, Sadness1 and Sadness2, and Shame1 and Shame2). This detailed data was used to compute the aggregate data in data_emo_aggregate.csv and for the detailed discussion in Section 6 of the paper (a sketch of one plausible aggregation is given after this list).

    data_umux.csv: data collected from the user experience questionnaires in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    user experience data: summarised user experience data from the UMUX surveys (UMUX1 and UMUX2, as a usability metric ranging from 0 to 100).

    participants.txt: the list of participant identifiers that have registered for the experiment.
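
    For illustration only, a minimal R sketch of one plausible aggregation of data_emo.csv (the artifact's actual aggregation is defined in normalize_emo.r and may differ; only the column names come from the description above):

    # Average the positive and negative emotion differentials per participant
    # and session. Illustrative only: the real aggregation used to produce
    # data_emo_aggregate.csv lives in normalize_emo.r.
    emo <- read.csv("data/results/data_emo.csv")
    positive <- c("Admiration", "Desire", "Hope", "Fascination",
                  "Joy", "Satisfaction", "Pride")
    negative <- c("Anger", "Boredom", "Contempt", "Disgust",
                  "Fear", "Sadness", "Shame")
    for (s in 1:2) {
      emo[[paste0("POS", s)]] <- rowMeans(emo[paste0(positive, s)], na.rm = TRUE)
      emo[[paste0("NEG", s)]] <- rowMeans(emo[paste0(negative, s)], na.rm = TRUE)
    }
    head(emo[c("ID", "HINT", "POS1", "NEG1", "POS2", "NEG2")])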

    Analysis scripts

    The analysis scripts required to replicate the analysis of the results of the experiment as reported in the paper, namely in directory analysis:

    analysis.r: An R script to analyse the data in the provided CSV files; each performed analysis is documented within the file itself.

    requirements.r: An R script to install the required libraries for the analysis script.

    normalize_task.r: A Python script to normalize the task JSON data from file data_sessions.json into the CSV format required by the analysis script.

    normalize_emo.r: A Python script to compute the aggregate emotional response in the CSV format required by the analysis script from the detailed emotional response data in the CSV format of data_emo.csv.

    Dockerfile: a Docker script to automate running the analysis scripts over the collected data.

    Setup

    To replicate the experiment and the analysis of the results, only Docker is required.

    If you wish to manually replicate the experiment and collect your own data, you'll need to install:

    A modified version of the Alloy4Fun platform, which is built in the Meteor web framework. This version of Alloy4Fun is publicly available in branch study of its repository at https://github.com/haslab/Alloy4Fun/tree/study.

    If you wish to manually replicate the analysis of the data collected in our experiment, you'll need to install:

    Python to manipulate the JSON data collected in the experiment. Python is freely available for download at https://www.python.org/downloads/, with distributions for most platforms.

    R software for the analysis scripts. R is freely available for download at https://cran.r-project.org/mirrors.html, with binary distributions available for Windows, Linux and Mac.

    Usage

    Experiment replication

    This section describes how to replicate our user study experiment and collect data about how different hints impact the performance of participants.

    To launch the Alloy4Fun platform populated with the tasks for each session, just run the following commands from the root directory of the artifact. The Meteor server may take a few minutes to launch; wait for the "Started your app" message to show.

    cd experiment
    docker-compose up

    This will launch Alloy4Fun at http://localhost:3000. The tasks are accessed through permalinks assigned to each participant. The experiment allows for up to 104 participants, and the list of available identifiers is given in file identifiers.txt. The group of each participant is determined by the last character of the identifier, either N, L, E or D. The task database can be consulted in directory data/experiment, in Alloy4Fun JSON files.

    In the 1st session, each participant was given one permalink that gives access to 12 sequential tasks. The permalink is simply the participant's identifier, so participant 0CAN would just access http://localhost:3000/0CAN. The next task becomes available after a correct submission to the current task or when a time-out occurs (5 mins). Each participant was assigned to one of the treatment groups, so different kinds of hints are provided depending on the permalink. Below are 4 permalinks, one for each hint group:

    Group N (no hints): http://localhost:3000/0CAN

    Group L (error locations): http://localhost:3000/CA0L

    Group E (counter-example): http://localhost:3000/350E

    Group D (error description): http://localhost:3000/27AD

    In the 2nd session, as in the 1st, each permalink gave access to 12 sequential tasks, and the next task becomes available after a correct submission or a time-out (5 mins). The permalink is constructed by prepending P- to the participant's identifier, so participant 0CAN would access http://localhost:3000/P-0CAN. In the 2nd session all participants were expected to solve the tasks without any hints, so the permalinks from different groups are undifferentiated.

    Before the 1st session the participants should answer the socio-demographic questionnaire, which should ask for the following information: unique identifier, age, sex, familiarity with the Alloy language, and average academic grade.

    Before and after both sessions the participants should answer the standard PrEmo 2 questionnaire. PrEmo 2 is published under an Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons licence (CC BY-NC-ND 4.0). This means that you are free to use the tool for non-commercial purposes as long as you give appropriate credit, provide a link to the license, and do not modify the original material. The original material, namely the depictions of the different emotions, can be downloaded from https://diopd.org/premo/. The questionnaire should ask for the unique user identifier and for the degree of attachment to each of the 14 depicted emotions, expressed on a 5-point Likert scale.

    After both sessions the participants should also answer the standard UMUX questionnaire. This questionnaire can be used freely, and should ask for the user's unique identifier and answers to the standard 4 questions on a 7-point Likert scale. For information about the questions, how to implement the questionnaire, and how to compute the usability metric (a score ranging from 0 to 100) from the answers, please see the original paper (a scoring sketch follows the reference):

    Kraig Finstad. 2010. The usability metric for user experience. Interacting with computers 22, 5 (2010), 323–327.
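
    For reference, a minimal R sketch of the UMUX scoring rule from Finstad (2010), assuming the four 1-7 responses are held in arguments q1 to q4 (the names are illustrative, not part of the artifact):

    # UMUX scoring: odd items are positively worded and score (response - 1);
    # even items are negatively worded and score (7 - response). The four item
    # scores (0-6 each) sum to at most 24 and are rescaled to 0-100.
    umux_score <- function(q1, q2, q3, q4) {
      items <- c(q1 - 1, 7 - q2, q3 - 1, 7 - q4)
      sum(items) / 24 * 100
    }
    umux_score(6, 2, 7, 1)  # example response pattern, about 91.7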

    Analysis of other applications of the experiment

    This section describes how to replicate the analysis of the data collected in an application of the experiment described in Experiment replication.

    The analysis script expects data in 4 CSV files,

  2. Replication Data for: Revisiting 'The Rise and Decline' in a Population of Peer Production Projects

    • search.dataone.org
    • dataverse.harvard.edu
    Cite
    TeBlunthuis, Nathan; Aaron Shaw; Benjamin Mako Hill (2023). Replication Data for: Revisiting 'The Rise and Decline' in a Population of Peer Production Projects [Dataset]. http://doi.org/10.7910/DVN/SG3LP1
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    TeBlunthuis, Nathan; Aaron Shaw; Benjamin Mako Hill
    Description

    This archive contains code and data for reproducing the analysis for "Replication Data for Revisiting 'The Rise and Decline' in a Population of Peer Production Projects". Depending on what you hope to do with the data, you probably do not want to download all of the files, and depending on your computational resources you may not be able to run all stages of the analysis. The code for all stages of the analysis, including typesetting the manuscript and running the analysis, is in code.tar. If you only want to run the final analysis or to play with the datasets used in the analysis of the paper, you want intermediate_data.7z or the uncompressed tab and csv files.

    The data files are created in a four-stage process. The first stage uses the program "wikiq" to parse MediaWiki XML dumps and create TSV files that have edit data for each wiki. The second stage generates the all.edits.RDS file, which combines these TSVs into a dataset of edits from all the wikis; this file is expensive to generate and, at 1.5GB, is pretty big. The third stage builds smaller intermediate files that contain the analytical variables from these TSV files. The fourth stage uses the intermediate files to generate smaller RDS files that contain the results. Finally, knitr and latex typeset the manuscript. A stage will only run if the outputs from the previous stages do not exist, so if the intermediate files exist they will not be regenerated and only the final analysis will run; the exception is that stage 4, fitting models and generating plots, always runs. If you only want to replicate from the second stage onward, you want wikiq_tsvs.7z. If you want to replicate everything, you want wikia_mediawiki_xml_dumps.7z.001, wikia_mediawiki_xml_dumps.7z.002, and wikia_mediawiki_xml_dumps.7z.003. These instructions work backwards, from building the manuscript using knitr, to loading the datasets and running the analysis, to building the intermediate datasets.

    Building the manuscript using knitr: This requires working latex, latexmk, and knitr installations. Depending on your operating system you might install these packages in different ways; on Debian Linux you can run apt install r-cran-knitr latexmk texlive-latex-extra. Alternatively, you can upload the necessary files to a project on Overleaf.com. Download code.tar; this has everything you need to typeset the manuscript. Unpack the tar archive (on a unix system, tar xf code.tar) and navigate to code/paper_source. Install the R dependencies: in R, run install.packages(c("data.table","scales","ggplot2","lubridate","texreg")). On a unix system you should be able to run make to build the manuscript generalizable_wiki.pdf; otherwise you should try uploading all of the files (including the tables, figure, and knitr folders) to a new project on Overleaf.com.

    Loading intermediate datasets: The intermediate datasets are found in the intermediate_data.7z archive. They can be extracted on a unix system using the command 7z x intermediate_data.7z; the files are 95MB uncompressed. These are RDS (R data set) files and can be loaded in R using readRDS, for example newcomer.ds <- readRDS("newcomers.RDS"). If you wish to work with these datasets using a tool other than R, you might prefer to work with the .tab files.

    Running the analysis: Fitting the models may not work on machines with less than 32GB of RAM. If you have trouble, you may find the functions in lib-01-sample-datasets.R useful to create stratified samples of data for fitting models; see line 89 of 02_model_newcomer_survival.R for an example. Download code.tar and intermediate_data.7z to your working folder and extract both archives (on a unix system, tar xf code.tar && 7z x intermediate_data.7z). Install the R dependencies: install.packages(c("data.table","ggplot2","urltools","texreg","optimx","lme4","bootstrap","scales","effects","lubridate","devtools","roxygen2")). On a unix system you can simply run regen.all.sh to fit the models, build the plots and create the RDS files.

    Generating datasets, building the intermediate files: The intermediate files are generated from all.edits.RDS. This process requires about 20GB of memory. Download all.edits.RDS, userroles_data.7z, selected.wikis.csv, and code.tar. Unpack code.tar and userroles_data.7z (on a unix system, tar xf code.tar && 7z x userroles_data.7z). Install the R dependencies: in R, run install.packages(c("data.table","ggplot2","urltools","texreg","optimx","lme4","bootstrap","scales","effects","lubridate","devtools","roxygen2")). Run 01_build_datasets.R.

    Generating datasets, building all.edits.RDS: The intermediate RDS files used in the analysis are created from all.edits.RDS. To replicate building all.edits.RDS, you only need to run 01_build_datasets.R when the int... Visit https://dataone.org/datasets/sha256%3Acfa4980c107154267d8eb6dc0753ed0fde655a73a062c0c2f5af33f237da3437 for complete metadata about this dataset.
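
    As a concrete illustration of the loading-and-sampling workflow above (a sketch only: the stratification column wiki.name and the per-stratum size are assumptions; the archive's own helpers are in lib-01-sample-datasets.R):

    # Load an intermediate dataset and draw a stratified subsample so models
    # can be fitted on machines with less than 32GB of RAM.
    library(data.table)
    newcomer.ds <- readRDS("newcomers.RDS")   # from intermediate_data.7z
    dt <- as.data.table(newcomer.ds)
    set.seed(42)
    # Sample up to 1000 rows per wiki; "wiki.name" is a hypothetical column.
    sample.dt <- dt[, .SD[sample(.N, min(.N, 1000))], by = wiki.name]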

  3. Inflation- Unemployment Data & Analysis Codes (R)

    • data.mendeley.com
    Cite
    Hazar Altinbas (2018). Inflation- Unemployment Data & Analysis Codes (R) [Dataset]. http://doi.org/10.17632/v9679528f7.1
    Dataset updated
    Sep 11, 2018
    Authors
    Hazar Altinbas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data is used for examination of the inflation-unemployment relationship for 18 countries after 1991. Inflation data is obtained from the World Bank database (https://data.worldbank.org/indicator/FP.CPI.TOTL.ZG) and unemployment data is obtained from the International Labour Organization (http://www.ilo.org/wesodata/).

    The analysis period differs across countries because of structural breaks determined by a single change point detection algorithm included in the changepoint package of Killick & Eckley (2014). Granger causality is tested with the Toda & Yamamoto (1995) procedure. Integration levels are determined with 3 stationarity tests. VAR models are run with the vars package (Pfaff, Stigler & Pfaff, 2018) without trend and constant terms. The cointegration test is conducted with the urca package (Pfaff, Zivot, Stigler & Pfaff, 2016).

    All data files are .csv files. Analysts need to change the country index (variable name: j) in order to see individual results. Findings can be seen in the article.
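
    A minimal R sketch of this workflow (not the authors' code: the series names inf and une, the lag cap, and dmax = 1 are illustrative assumptions, and a strict Toda & Yamamoto test would restrict the Wald test to the first p lag coefficients only):

    library(changepoint)  # Killick & Eckley (2014)
    library(vars)         # Pfaff, Stigler & Pfaff (2018)

    # Detect a single change point in inflation and start the sample after it.
    bp <- cpt.mean(inf, method = "AMOC")
    start <- cpts(bp) + 1
    y <- cbind(inf = inf[start:length(inf)], une = une[start:length(une)])

    # Choose the VAR order p, then fit a VAR with p + dmax lags (dmax = the
    # maximal integration order from the stationarity tests), with no trend
    # or constant terms.
    p <- VARselect(y, lag.max = 8, type = "none")$selection[["AIC(n)"]]
    dmax <- 1
    fit <- VAR(y, p = p + dmax, type = "none")
    causality(fit, cause = "une")  # Granger-type Wald test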

    Killick, R., & Eckley, I. (2014). changepoint: An R package for changepoint analysis. Journal of Statistical Software, 58(3), 1-19.

    Pfaff, B., Stigler, M., & Pfaff, M. B. (2018). Package 'vars'. https://cran.r-project.org/web/packages/vars/vars.pdf

    Pfaff, B., Zivot, E., Stigler, M., & Pfaff, M. B. (2016). Package 'urca': Unit root and cointegration tests for time series data. R package version 1-2.

    Toda, H. Y., & Yamamoto, T. (1995). Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics, 66(1-2), 225-250.

  4. There are five documents included. The first PDF document contains the data...

    • figshare.com
    Cite
    Hengzhi Hu (2025). There are five documents included. The first PDF document contains the data analysis scripts and the analysis statement from the educational authority that regulates the research. Each page in this document is stamped for verification purposes. The second TXT document contains the data analysis scripts obtained from R program. The script is exactly the same as the one included in the first document. The third PDF document contains the official statement on the data access and confidentiality, with official stamp for verification. Due to ethical considerations and the regulation of local educational policies, the raw dataset for the study cannot be shared but can be accessed on-site with the educational authority. The last two documents are the speaking tasks used in the study, including a text presentation of task instructions and a test instruction recording. [Dataset]. http://doi.org/10.6084/m9.figshare.29390120.v1
    Available download format: pdf
    Dataset updated
    Jun 24, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Hengzhi Hu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data analysis transcript is specifically for the study titled "What Shapes Automated Ratings in Computer-Based English Speaking Tests? Perspectives from Analytic Complexity, Accuracy, Fluency, and Pronunciation Indices". This study adopts a quantitative, correlational research design to investigate the extent to which various linguistic features, namely Complexity, Accuracy, Fluency, and Pronunciation (CAFP), predict Automated Ratings (ARs) in China's Computer-Based English Speaking Test (CBEST) administered during Zhongkao. The aim is to uncover how these linguistic indices influence machine-generated scores and to evaluate the validity and fairness of automated assessment systems in high-stakes educational contexts.

    The CBEST format used in this study includes three task types: Reading-Aloud, Communicative Question & Answer, and Response to a Topic. These tasks are scored using an integrated system developed by iFlytek, which combines automatic speech recognition (ASR), deep learning models, and benchmarked manual expert evaluation. The assessment model has been officially recognized and is widely adopted in Chinese provinces for junior secondary school students.
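
    Given the correlational design described above, a minimal R sketch of the kind of analysis it implies (illustrative only: the file name and the column names ar, complexity, accuracy, fluency and pronunciation are hypothetical, since the raw dataset cannot be shared):

    # Hypothetical multiple regression of Automated Ratings (ARs) on the four
    # CAFP indices; the study's actual scripts are in the PDF/TXT documents.
    scores <- read.csv("cbest_scores.csv")  # hypothetical file name
    fit <- lm(ar ~ complexity + accuracy + fluency + pronunciation,
              data = scores)
    summary(fit)  # coefficients indicate how each index predicts the AR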

  5. Online survey data for the 2017 Aesthetic value project (NESP TWQ 3.2.3, Griffith Institute for Tourism Research)

    • catalogue.eatlas.org.au
    Cite
    Australian Institute of Marine Science (AIMS) (2019). Online survey data for the 2017 Aesthetic value project (NESP TWQ 3.2.3, Griffith Institute for Tourism Research) [Dataset]. https://catalogue.eatlas.org.au/geonetwork/srv/api/records/595f79c7-b553-4aab-9ad8-42c092508f81
    Dataset updated
    Nov 22, 2019
    Dataset provided by
    Australian Institute of Marine Science (AIMS)
    Time period covered
    Jan 28, 2017 - Jan 28, 2018
    Description

    This dataset consists of three data folders including all related documents of the online survey conducted within the NESP 3.2.3 project (Tropical Water Quality Hub), plus a survey format document representing how the survey was designed. Apart from participants' demographic information, the survey consists of three sections: conjoint analysis, picture rating and an open question. The corresponding outcomes of these three sections were downloaded from the Qualtrics website and used for three different data analysis processes.

    Data related to the first section, "conjoint analysis", is saved in the Conjoint analysis folder, which contains two sub-folders. The first one includes a plan file in SAV format representing the design suggested by SPSS orthogonal analysis for testing beauty factors, and 9 photoshopped pictures used in the survey. The second (i.e. Final results) contains 1 SAV file named "data1", which is the imported results of the conjoint analysis section in SPSS; 1 SPS file named "Syntax1" containing the code used to run the conjoint analysis; 2 SAV files as the output of the conjoint analysis by SPSS; and 1 SPV file named "Final output" showing results of further SPSS analysis on the basis of the utility and importance data.

    Data related to the second section, "Picture rating", is saved in the Picture rating folder, which includes two subfolders. One subfolder contains 2500 pictures of the Great Barrier Reef used in the rating survey section. These pictures are organised by name and stored in two folders named "Survey Part 1" and "Survey Part 2", corresponding to the two parts of the rating survey. The other subfolder, "Rating results", consists of one XLSX file with the survey results downloaded from the Qualtrics website.

    Finally, data related to the open question is saved in the "Open question" folder. It contains one CSV file and one PDF file recording participants' answers to the open question, as well as one PNG file with a screenshot of the Leximancer analysis outcome.

    Methods: This dataset resulted from the input and output of an online survey regarding how people assess the beauty of the Great Barrier Reef. This survey was designed for multiple purposes, including three main sections: (1) conjoint analysis (ranking 9 photoshopped pictures to determine the relative importance weights of beauty attributes), (2) picture rating (2500 pictures to be rated) and (3) an open question on the factors that make a picture of the Great Barrier Reef beautiful in participants' opinion (determining beauty factors from the tourist perspective). Pictures used in this survey were downloaded from public sources such as the websites of Tourism and Events Queensland and Tropical Tourism North Queensland, as well as tourist sharing sources (i.e. Flickr). Flickr pictures were downloaded using the key words "Great Barrier Reef". About 10,000 pictures were downloaded in August and September 2017; 2,500 pictures were then selected based on several research criteria: (1) underwater pictures of the GBR, (2) without humans, (3) viewed from 1-2 metres from objects and (4) of high resolution.

    The survey was created on the Qualtrics website and launched on 4th October 2017 using the Qualtrics survey service. Each participant rated 50 pictures randomly selected from the pool of 2500 survey pictures. 772 survey completions were recorded, and 705 questionnaires were eligible for data analysis after filtering out unqualified questionnaires. Conjoint analysis data was imported to IBM SPSS in SAV format and the output was saved in SPV format. For the automatic aesthetic rating of the 2500 Great Barrier Reef pictures, all pictures were rated (on a 1-10 scale) by at least 10 participants, and this dataset was saved in an XLSX file which is used to train and test an Artificial Intelligence (AI)-based system for recognising and assessing the beauty of natural scenes. Answers to the open question were saved in an XLSX file and a PDF file to be employed for theme analysis with the Leximancer software.

    Further information can be found in the following publication: Becken, S., Connolly R., Stantic B., Scott N., Mandal R., Le D., (2018), Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence, Griffith Institute for Tourism Research Report No 15.

    Format: The Online survey dataset includes one PDF file representing the survey format with all sections and questions. It also contains three subfolders, each with multiple files. The Conjoint analysis subfolder contains the 9 JPG pictures, 1 SAV file for the Orthoplan subroutine outcome and 5 outcome documents (i.e. 3 SAV files, 1 SPS file, 1 SPV file). The Picture rating subfolder contains a capture of the 2500 pictures used in the survey and 1 Excel file for the rating results. The Open question subfolder includes 1 CSV file and 1 PDF file with participants' answers, and one PNG file for the analysis outcome.

    Data Dictionary:

    Card 1: Picture design option number 1 suggested by SPSS orthogonal analysis.
    Importance value: The relative importance weight of each beauty attribute calculated by SPSS conjoint analysis.
    Utility: Score reflecting the influential valence and degree of each beauty attribute on the beauty score.
    Syntax: Code used to run the conjoint analysis in SPSS.
    Leximancer: Specialised software for qualitative data analysis.
    Concept map: A map showing the relationship between the concepts identified.
    Q1_1: Beauty score of picture Q1_1 by the correspondent participant (i.e. survey part 1).
    Q2.1_1: Beauty score of picture Q2.1_1 by the correspondent participant (i.e. survey part 2).
    Conjoint_1: Ranking of picture 1, designed for conjoint analysis, by the correspondent participant.
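
    As an illustration of how the rating results might be summarised outside SPSS (a sketch only: the file name is assumed, and the Q-prefixed columns are assumed to hold numeric per-picture ratings as in the data dictionary above):

    # Read the rating results and compute a mean beauty score per picture.
    library(readxl)
    ratings <- read_excel("Rating results.xlsx")  # hypothetical file name
    q.cols <- grep("^Q", names(ratings), value = TRUE)
    mean.scores <- colMeans(ratings[q.cols], na.rm = TRUE)  # 1-10 scale
    head(sort(mean.scores, decreasing = TRUE))  # highest-rated pictures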

    References: Becken, S., Connolly R., Stantic B., Scott N., Mandal R., Le D., (2018), Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence, Griffith Institute for Tourism Research Report No 15.

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data esp3\3.2.3_Aesthetic-value-GBR

  6. TempEst 2 Development Data: Observed Stream Temperature, Covariates, Performance Data, and Analysis Notebooks

    • hydroshare.org
    Cite
    Daniel Philippus; Claudia R Corona; Katie Schneider; Ashley Rust; Terri S Hogue (2025). TempEst 2 Development Data: Observed Stream Temperature, Covariates, Performance Data, and Analysis Notebooks [Dataset]. http://doi.org/10.4211/hs.a8b243957f7946e388d10ab206990675
    Available download format: zip (1.7 GB)
    Dataset updated
    Feb 4, 2025
    Dataset provided by
    HydroShare
    Authors
    Daniel Philippus; Claudia R Corona; Katie Schneider; Ashley Rust; Terri S Hogue
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 2, 2001 - Dec 31, 2022
    Description

    This resource contains code and data related to the development of the TempEst 2 stream temperature remote sensing model (manuscript in review with Journal of Hydrology). The code includes the model implementation (model.R), some utility functions (valfn.R), data retrieval scripts for Google Earth Engine (eeretrieval.py and datapts.py), and a reproducible validation notebook (validation.Rmd), along with the knitted PDF of the latter (validation.pdf). The main data include stream temperature daily mean/max observations retrieved from the USGS NWIS as well as remotely-sensed and gridded observations retrieved using Google Earth Engine from NLDAS, ESA WorldCover, MODIS, ERA5, and EPA Ecoregions (using eeretrieval.py). These are contained in three files. AllData.csv includes all observations for mean temperature. ExtData.csv ("extended data") adds maximum temperature, at the expense of fewer total observations being included. Ecoregions.csv is not central to the analysis, but includes EPA Level I ecoregion classifications for convenience.

    Model performance tests can be reproduced using validation.Rmd. To run validation.Rmd in full, there must be a Data directory with subdirectories Density and TSLen, a Figures directory, at least one of the main data files (AllData.csv, ExtData.csv) or equivalent, and Ecoregions.csv. A knitted version of the Notebook is included in this resource. The error map plots also use an EPA Level I Ecoregions (https://gaftp.epa.gov/EPADataCommons/ORD/Ecoregions/cec_na/na_cec_eco_l1.zip) shapefile, which is assumed to be in an Ecoregions subdirectory of the parent directory. This dependency can be removed by replacing the plot.eco function with ordinary ggplot plotting.

    The two rda (RData) files contain different versions of a pre-trained model. model.rda contains a regular, pre-trained model function that can be used directly to generate predictions. krigs.rda contains a list of the actual fitted kriging models, which can be used for investigating model components (see demo.pdf).
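
    A minimal sketch of inspecting these files in R (the names of the objects stored inside the .rda files are assumptions; only the file names come from the description above):

    # Restore the pre-trained model objects and load the main data file.
    loaded <- load("model.rda")        # returns the names of restored objects
    print(loaded)                      # confirm what was actually loaded
    alldata <- read.csv("AllData.csv") # mean stream temperature observations
    str(alldata)                       # inspect the covariate columns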

    These data and related items of information have not been formally disseminated by NOAA, and do not represent any agency determination, view, or policy.
