Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These are the Stata datasets imported from Excel. They are used directly for the meta-analysis.
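As a rough illustration only, a minimal Stata sketch of such an import followed by a meta-analysis; the workbook, sheet, and variable names are hypothetical, and the community-contributed metan command is not named in the description above:

```stata
* Minimal sketch; file, sheet, and variable names are hypothetical
import excel using "studies.xlsx", sheet("Sheet1") firstrow clear

* Random-effects meta-analysis of an effect size and its standard error
* (metan is community-contributed: ssc install metan)
metan effect_size std_error, random
```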
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Firm-level data from 2009 to 2018 for 34 large gold mines in developing countries. The data are used to compute the deterministic, dynamic environmental and technical efficiencies of large gold mines in developing countries.
Steps to reproduce:
1. Run the R command to generate dynamic technical and dynamic environmental inefficiencies for every pair of subsequent periods (i.e. period t and t+1).
2. Combine the per-period inefficiency results files generated in R into a panel (see the Excel files in the results folder).
3. Import the combined Excel file into Stata and generate the final results reported in the paper.
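A minimal sketch of step 3, assuming a hypothetical combined workbook (panel_results.xlsx) and hypothetical identifier and inefficiency variable names:

```stata
* Minimal sketch of step 3; file and variable names are hypothetical
import excel using "panel_results.xlsx", firstrow clear

* Declare the panel structure (mine identifier and year are assumed names)
encode mine, gen(mine_id)
xtset mine_id year

* Summarise the dynamic inefficiency estimates by year
tabstat tech_ineff env_ineff, by(year) statistics(mean sd)
```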
Sheets from the Excel file are imported into Stata one sheet at a time.
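A minimal sketch of this sheet-by-sheet import, assuming a hypothetical workbook (data.xlsx) and hypothetical sheet names:

```stata
* Minimal sketch; workbook and sheet names are hypothetical
foreach sheet in Sheet1 Sheet2 Sheet3 {
    import excel using "data.xlsx", sheet("`sheet'") firstrow clear
    save "`sheet'.dta", replace
}
```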
We imported the Excel sheet "FGD patients' characteristics" into Stata to conduct a simple descriptive analysis. Accordingly, the saved dataset and its do-file have been shared with editors and reviewers for their reference. (ZIP)
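A minimal sketch of such a descriptive do-file, assuming a hypothetical workbook name and hypothetical variables (age, sex) not listed above:

```stata
* Minimal sketch; file, sheet, and variable names are hypothetical
import excel using "FGD_characteristics.xlsx", sheet("FGD patients characteristics") firstrow clear

* Simple descriptive statistics
summarize age
tabulate sex

* Save the dataset alongside the do-file for sharing
save fgd_characteristics, replace
```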
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data from an online experiment designed to test whether economically equivalent penalties—fees (paid before taking) and fines (paid after taking)—influence prosocial behaviour differently. Participants played a modified dictator game in which they could take points from another participant.
The dataset is provided in Excel format (Full-data.xlsx), along with a Stata do-file (submit.do) that reshapes, cleans, and analyses the data.
Platform: oTree
Recruitment: Prolific
Sample size: 201 participants
Design: Each participant played 20 rounds: 10 in the control condition and 10 in one treatment condition (fee or fine). Order of blocks was randomised.
Payment: 200 points = £1. One round was randomly selected for payment.
session – Session number
id – Participant ID
treatment – Assigned treatment (1 = Fee, 2 = Fine)
order – Order of blocks (0 = Control first, 1 = Treatment first)
For each round, participants made decisions in both control (c) and treatment (t) conditions.
c1, t1, c2, t2, … – Tokens available and/or allocated across control and treatment rounds.
takeX – Amount taken from the other participant in case X.
Social norms were elicited after the taking task. Variables include empirical, normative, and responsibility measures at both extensive and intensive margins:
eyX, etX – Empirical expectations (beliefs about what others do)
nyX, ntX – Normative expectations (beliefs about what others think is appropriate)
ryX, rtX – Responsibility measures
casenormX – Case identifier for norm elicitation
From survey responses:
Sex – Gender
Ethnicitysimplified – Simplified ethnicity category
Countryofresidence – Participant’s country of residence
order, session – Experimental setup metadata
The do-file (analysis.do) performs the following steps (see the sketch after this list):
Data Preparation
Import raw Excel file
Reshape from wide to long format (cases per participant)
Declare panel data (xtset id)
Variable Generation
Rename variables for clarity (e.g., take for amount taken)
Generate treatment dummies (treat)
Construct demographic dummies (gender, race, nationality)
Analysis Preparation
Create extensive and intensive margin variables
Generate expectation and norm measures
Output
Ready-to-analyse panel dataset for regression and statistical analysis
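A minimal Stata sketch of these steps, reusing the file name and variable stubs described above (Full-data.xlsx, id, takeX, eyX, etX, nyX, ntX, ryX, rtX, casenormX); the variable names created here (fine, female, took_any, take_amount) and the demographic coding are illustrative assumptions, not necessarily those used in the actual do-file:

```stata
* Minimal sketch of the preparation steps; details beyond the variable
* description above are assumptions
import excel using "Full-data.xlsx", firstrow clear

* Reshape from wide to long: one row per participant per case
reshape long take ey et ny nt ry rt casenorm, i(id) j(case)

* Declare panel data
xtset id case

* Treatment-arm indicator (coding from the description: 1 = Fee, 2 = Fine)
gen fine = treatment == 2

* Demographic dummy (category label is an assumption)
gen female = Sex == "Female"

* Extensive and intensive margins of taking
gen took_any    = (take > 0) if !missing(take)
gen take_amount = take if take > 0
```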
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The GAPs Data Repository provides a comprehensive overview of available qualitative and quantitative data on national return regimes, now accessible through an advanced web interface at https://data.returnmigration.eu/. This updated guideline outlines the complete process, from the initial data collection for the return migration data repository to the development of a comprehensive web-based platform. Through iterative development, participatory approaches, and rigorous quality checks, we have ensured a systematic representation of return migration data at both national and comparative levels.
The Repository organizes data into five main categories, covering diverse aspects and offering a holistic view of return regimes: country profiles, legislation, infrastructure, international cooperation, and descriptive statistics. These categories, further divided into subcategories, are based on insights from a literature review, existing datasets, and empirical data collection from 14 countries. The selection of categories prioritizes relevance for understanding return and readmission policies and practices, as well as data accessibility, reliability, clarity, and comparability. Raw data are meticulously collected by the national experts.
The transition to a web-based interface builds upon the Repository's original structure, which was initially developed using REDCap (Research Electronic Data Capture), a secure web application for building and managing online surveys and databases. REDCap ensures systematic data entry and stores the data on Uppsala University's servers, while significantly improving accessibility, usability, and data security. It also enables users to export any or all data from the project when granted full data export privileges. Data can be exported in various ways and formats, including Microsoft Excel, SAS, Stata, R, or SPSS, for analysis.
At this stage, the Data Repository design team also converted tailored records of available data into public reports accessible to anyone with a unique URL, without the need to log in to REDCap or obtain permission to access the GAPs Project Data Repository. Public reports can be used to share information with stakeholders or external partners without granting them access to the project or requiring them to set up a personal account. Currently, all public report links inserted in this report are also available on the Repository's webpage, allowing users to export original data.
This report also includes a detailed codebook to help users understand the structure, variables, and methodologies used in data collection and organization. This addition ensures transparency and provides a comprehensive framework for researchers and practitioners to interpret the data effectively. The GAPs Data Repository is committed to providing accessible, well-organized, and reliable data by moving to a centralized web platform and incorporating advanced visuals. The Repository aims to contribute inputs for research, policy analysis, and evidence-based decision-making in the return and readmission field. Explore the GAPs Data Repository at https://data.returnmigration.eu/.
This is the dataset for Kalis et al. (forthcoming). It is in long format, with three rows for each participant. Repeated measures are indicated by the "gesture" variable, and the between-subjects attention condition is indicated by the AttCond variable. Reyimmediate and Reydelay are the RCFT immediate and delayed scores for each participant. The dependent variables are GestCorrect, MisGestMisled, and TotalCorrect.
Data were collected from 94 participants as part of an Honours project in 2022. Qualtrics was used to collect demographic data. Reyimmediate and Reydelay data were collected on paper hard copies and then manually coded and entered electronically into Stata. GestCorrect, MisGestMisled, and TotalCorrect were verbal recall scores collected through individual interviews with participants, which were audio and video recorded.
Recordings were then coded in ELAN (https://archive.mpi.nl/tla/elan). Codes were exported as .csv files, collated in Microsoft Excel, and imported into Stata for analysis.
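A rough sketch of the final import step, assuming a hypothetical collated file (elan_codes.csv) and a hypothetical participant identifier alongside the variables described above:

```stata
* Minimal sketch; the collated file name and participant identifier are hypothetical
import delimited using "elan_codes.csv", clear

* Long format: three rows per participant, repeated measures indexed by gesture
sort participant_id gesture
list participant_id gesture AttCond GestCorrect MisGestMisled TotalCorrect in 1/6
```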
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We used the standard and validated Pittsburgh Sleep Quality Index (PSQI), which was developed by researchers at the University of Pittsburgh in 1988. The questionnaire included baseline variables such as age, sex, and academic year, together with the PSQI questions addressing participants' sleep habits and quality. The PSQI assesses sleep quality during the previous month and contains 19 self-rated questions that yield seven components: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. Each component is scored from 0 to 3, yielding a global PSQI score ranging from 0 to 21. A total score of 0 to 4 is considered normal sleep quality, whereas scores greater than 4 are categorized as poor sleep quality.
Data collected from students through Google Forms were extracted to Google Sheets, cleaned in Excel, and then imported into and analyzed with Stata 15. A simple descriptive analysis was performed to examine the responses for every PSQI variable. Scores were then calculated following the PSQI form administration instructions.
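A minimal sketch of the scoring step, assuming the seven component scores have already been derived as variables comp1 to comp7 (hypothetical names, each scored 0 to 3):

```stata
* Minimal sketch; component variable names comp1-comp7 are hypothetical
egen psqi_total = rowtotal(comp1 comp2 comp3 comp4 comp5 comp6 comp7)

* Cut-off from the description above: 0-4 normal, >4 poor sleep quality
gen poor_sleep = psqi_total > 4
label define sleepq 0 "Normal sleep quality" 1 "Poor sleep quality"
label values poor_sleep sleepq

tabulate poor_sleep
```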