Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An SPSS file with the data used in the statistical analysis. Covariates were excluded from the file due to restrictions in the ethical permission. However, a complete file is available to researchers on request at publication@ventorp.com. (SAV)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
GENERAL INFORMATION
Title of Dataset: A dataset from a survey investigating disciplinary differences in data citation
Date of data collection: January to March 2022
Collection instrument: SurveyMonkey
Funding: Alfred P. Sloan Foundation
SHARING/ACCESS INFORMATION
Licenses/restrictions placed on the data: These data are available under a CC BY 4.0 license
Links to publications that cite or use the data:
Gregory, K., Ninkov, A., Ripp, C., Peters, I., & Haustein, S. (2022). Surveying practices of data citation and reuse across disciplines. Proceedings of the 26th International Conference on Science and Technology Indicators. International Conference on Science and Technology Indicators, Granada, Spain. https://doi.org/10.5281/ZENODO.6951437
Gregory, K., Ninkov, A., Ripp, C., Roblin, E., Peters, I., & Haustein, S. (2023). Tracing data: A survey investigating disciplinary differences in data citation. Zenodo. https://doi.org/10.5281/zenodo.7555266
DATA & FILE OVERVIEW
File List
Filename: MDCDatacitationReuse2021Codebookv2.pdf Codebook
Filename: MDCDataCitationReuse2021surveydatav2.csv Dataset format in csv
Filename: MDCDataCitationReuse2021surveydatav2.sav Dataset format in SPSS
Filename: MDCDataCitationReuseSurvey2021QNR.pdf Questionnaire
Additional related data collected that were not included in the current data package: open-ended questions asked of respondents
METHODOLOGICAL INFORMATION
Description of methods used for collection/generation of data:
The development of the questionnaire (Gregory et al., 2022) was centered around the creation of two main branches of questions for the primary groups of interest in our study: researchers that reuse data (33 questions in total) and researchers that do not reuse data (16 questions in total). The population of interest for this survey consists of researchers from all disciplines and countries, sampled from the corresponding authors of papers indexed in the Web of Science (WoS) between 2016 and 2020.
We received 3,632 responses, 2,509 of which were completed, representing a completion rate of 68.6%. Incomplete responses were excluded from the dataset. The final dataset contains 2,492 complete responses, an uncorrected response rate of 1.57%. Controlling for invalid emails, bounced emails and opt-outs (n=5,201) produced a response rate of 1.62%, similar to surveys using comparable recruitment methods (Gregory et al., 2020).
Methods for processing the data:
Results were downloaded from SurveyMonkey in CSV format and prepared for analysis in Excel and SPSS by recoding ordinal and multiple-choice questions and removing missing values.
Instrument- or software-specific information needed to interpret the data:
The dataset is provided in SPSS format, which requires IBM SPSS Statistics. The dataset is also available in a coded format in CSV. The codebook is required to interpret the values.
DATA-SPECIFIC INFORMATION FOR: MDCDataCitationReuse2021surveydata
Number of variables: 95
Number of cases/rows: 2,492
Missing data codes: 999 = Not asked
Refer to MDCDatacitationReuse2021Codebook.pdf for detailed variable information.
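As a usage note, the following is a minimal sketch of loading the dataset and handling the documented missing-data code, assuming Python with pandas and pyreadstat; the file names come from the file list above, and everything else about the environment is an assumption rather than part of the data package.

```python
# Minimal sketch: load the survey data and treat the documented
# missing-data code (999 = "Not asked") as missing before analysis.
# Assumes pandas and pyreadstat are installed; file names are taken
# from the file list above.
import pandas as pd
import pyreadstat

# SPSS version: variable and value labels are returned in `meta`
df, meta = pyreadstat.read_sav("MDCDataCitationReuse2021surveydatav2.sav")

# CSV version: values are numeric codes that must be interpreted
# with the codebook (MDCDatacitationReuse2021Codebookv2.pdf)
df_csv = pd.read_csv("MDCDataCitationReuse2021surveydatav2.csv")

# Recode 999 ("Not asked") to missing
df = df.replace(999, float("nan"))

print(df.shape)                 # expected: (2492, 95)
print(meta.column_labels[:5])   # first few variable labels from the SPSS file
```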
General information: The data sets contain information on how often materials of studies available through GESIS: Data Archive for the Social Sciences were downloaded and/or ordered through one of the archive's platforms/services between 2004 and 2017.
Sources and platforms: Study materials are accessible through various GESIS platforms and services: Data Catalogue (DBK), histat, datorium, data service (and others).
Years available: Data Catalogue: 2012-2017; data service: 2006-2017; datorium: 2014-2017; histat: 2004-2017
Data sets: Data set ZA6899_Datasets_only_all_sources contains information on how often data files, such as those with a dta (Stata) or sav (SPSS) extension, have been downloaded. Identification of data files is handled semi-automatically (depending on the platform/service). Multiple downloads of one file by the same user (identified through IP address or username for registered users) on the same day are counted as one download only.
Data set ZA6899_Doc_and_Data_all_sources contains information on how often study materials have been downloaded. Multiple downloads of any file of the same study by the same user (identified through IP address or username for registered users) on the same day are counted as one download only.
Both data sets are available in three formats: csv (quoted, semicolon-separated), dta (Stata v13, labeled) and sav (SPSS, labeled). All formats contain identical information.
Variables: Variables/columns in both data sets are identical.
za_nr 'Archive study number'
version 'GESIS Archive Version'
doi 'Digital Object Identifier'
StudyNo 'Study number of respective study'
Title 'English study title'
Title_DE 'German study title'
Access 'Access category (0, A, B, C, D, E)'
PubYear 'Publication year of last version of the study'
inZACAT 'Study is currently also available via ZACAT'
inHISTAT 'Study is currently also available via HISTAT'
inDownloads 'There are currently data files available for download for this study in DBK or datorium'
Total 'All downloads combined'
downloads_2004 'downloads/orders from all sources combined in 2004' [up to ...] downloads_2017 'downloads/orders from all sources combined in 2017'
d_2004_dbk 'downloads from source dbk in 2004' [up to ...] d_2017_dbk 'downloads from source dbk in 2017'
d_2004_histat 'downloads from source histat in 2004' [up to ...] d_2017_histat 'downloads from source histat in 2017'
d_2004_dataservice 'downloads/orders from source dataservice in 2004' [up to ...] d_2017_dataservice 'downloads/orders from source dataservice in 2017'
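As an illustration only, here is a minimal sketch of reading the semicolon-separated CSV export and cross-checking the yearly download columns against the Total column, assuming Python with pandas; the file name is a hypothetical rendering of the data set name above, and whether Total equals the sum of the yearly columns is an assumption that should be verified against the codebook.

```python
# Minimal sketch: load the quoted, semicolon-separated CSV export and
# compare the per-year download columns with the reported Total column.
# Assumes pandas; the file name is hypothetical, the column names come
# from the variable list above.
import pandas as pd

df = pd.read_csv("ZA6899_Datasets_only_all_sources.csv", sep=";")

year_cols = [f"downloads_{y}" for y in range(2004, 2018)]
df["yearly_sum"] = df[year_cols].sum(axis=1)

# Studies whose summed yearly downloads differ from the reported Total
# (assumes Total is simply the sum of the yearly columns)
mismatch = df[df["yearly_sum"] != df["Total"]]
print(len(mismatch), "studies where yearly columns do not add up to Total")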
More information is available within the codebook.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data files in .xlsx and .sav format, containing the raw data of the study; these can be opened with Excel and SPSS software.
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset consists of three data folders including all related documents of the online survey conducted within the NESP 3.2.3 project (Tropical Water Quality Hub), together with a survey format document showing how the survey was designed. Apart from participants’ demographic information, the survey consists of three sections: conjoint analysis, picture rating and an open question. The corresponding outcomes of these three sections were downloaded from the Qualtrics website and used for three different data analysis processes.
Data related to the first section, “conjoint analysis”, are saved in the Conjoint analysis folder, which contains two sub-folders. The first includes a plan file in SAV format representing the design suggested by SPSS orthogonal analysis for testing beauty factors, together with the 9 photoshopped pictures used in the survey. The second (i.e. Final results) contains one SAV file named “data1”, holding the results of the conjoint analysis section imported into SPSS; one SPS file named “Syntax1”, containing the code used to run the conjoint analysis; two SAV files produced as the output of the conjoint analysis by SPSS; and one SPV file named “Final output”, showing the results of further SPSS analysis based on the utility and importance data.
Data related to the second section, “Picture rating”, are saved in the Picture rating folder, which includes two subfolders. One subfolder contains the 2,500 pictures of the Great Barrier Reef used in the rating section. These pictures are organised by name and stored in two folders, “Survey Part 1” and “Survey Part 2”, corresponding to the two parts of the rating section. The other subfolder, “Rating results”, consists of one XLSX file containing the survey results downloaded from the Qualtrics website.
Finally, data related to the open question are saved in the “Open question” folder. It contains one CSV file and one PDF file recording participants’ answers to the open question, as well as one PNG file showing a screenshot of the Leximancer analysis outcome.
Methods: This dataset resulted from the input and output of an online survey on how people assess the beauty of the Great Barrier Reef. The survey was designed for multiple purposes and includes three main sections: (1) conjoint analysis (ranking 9 photoshopped pictures to determine the relative importance weights of beauty attributes), (2) picture rating (2,500 pictures to be rated) and (3) an open question on the factors that make a picture of the Great Barrier Reef beautiful in participants’ opinion (determining beauty factors from the tourist perspective). Pictures used in this survey were downloaded from public sources such as the websites of Tourism and Events Queensland and Tropical Tourism North Queensland, as well as tourist sharing sources (i.e. Flickr). Flickr pictures were downloaded using the keywords “Great Barrier Reef”. About 10,000 pictures were downloaded in August and September 2017. 2,500 pictures were then selected based on several research criteria: (1) underwater pictures of the GBR, (2) without humans, (3) taken 1-2 metres from the objects and (4) of high resolution.
The survey was created on the Qualtrics website and launched on 4th October 2017 using the Qualtrics survey service. Each participant rated 50 pictures randomly selected from the pool of 2,500 survey pictures. 772 survey completions were recorded, and 705 questionnaires were eligible for data analysis after filtering out unqualified questionnaires. Conjoint analysis data were imported into IBM SPSS in SAV format and the output was saved in SPV format. All 2,500 Great Barrier Reef pictures were rated (on a 1-10 scale) by at least 10 participants; this rating dataset was saved in an XLSX file, which is used to train and test an Artificial Intelligence (AI)-based system for recognising and assessing the beauty of natural scenes. Answers to the open question were saved in an XLSX file and a PDF file for theme analysis with the Leximancer software.
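As a usage illustration, here is a minimal sketch of computing the mean beauty score and rater count per picture from the rating results spreadsheet, assuming Python with pandas (and openpyxl for XLSX); the file name is hypothetical and the wide layout (one column per picture, one row per respondent) follows the data dictionary below.

```python
# Minimal sketch: average 1-10 beauty score per picture from the rating
# results spreadsheet. Assumes pandas + openpyxl; the file name is
# hypothetical and the column layout follows the data dictionary below.
import pandas as pd

ratings = pd.read_excel("Rating results.xlsx")

# Keep only the picture-rating columns (e.g. Q1_1, Q2.1_1, ...)
picture_cols = [c for c in ratings.columns if str(c).startswith("Q")]

# Mean score per picture, ignoring the pictures a respondent did not see
mean_scores = ratings[picture_cols].mean(skipna=True)

# Number of respondents who rated each picture (expected to be >= 10)
n_raters = ratings[picture_cols].count()

summary = pd.DataFrame({"mean_score": mean_scores, "n_raters": n_raters})
print(summary.head())
```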
Further information can be found in the following publication: Becken, S., Connolly R., Stantic B., Scott N., Mandal R., Le D., (2018), Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence, Griffith Institute for Tourism Research Report No 15.
Format: The Online survey dataset includes one PDF file containing the survey format with all sections and questions. It also contains three subfolders, each with multiple files. The Conjoint analysis subfolder contains the 9 JPG pictures, 1 SAV file for the Orthoplan subroutine outcome, and 5 outcome documents (i.e. 3 SAV files, 1 SPS file, 1 SPV file). The Picture rating subfolder contains the 2,500 pictures used in the survey and 1 Excel file of rating results. The Open question subfolder includes 1 CSV file and 1 PDF file containing participants’ answers, and one PNG file for the analysis outcome.
Data Dictionary:
Card 1: Picture design option number 1 suggested by SPSS orthogonal analysis.
Importance value: The relative importance weight of each beauty attribute calculated by SPSS conjoint analysis.
Utility: Score reflecting the influential valence and degree of each beauty attribute on the beauty score.
Syntax: Code used to run the conjoint analysis in SPSS.
Leximancer: Specialised software for qualitative data analysis.
Concept map: A map showing the relationship between the concepts identified.
Q1_1: Beauty score of the picture Q1_1 by the corresponding participant (i.e. survey part 1).
Q2.1_1: Beauty score of the picture Q2.1_1 by the corresponding participant (i.e. survey part 2).
Conjoint _1: Ranking of picture 1 designed for conjoint analysis by the corresponding participant.
References: Becken, S., Connolly R., Stantic B., Scott N., Mandal R., Le D., (2018), Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence, Griffith Institute for Tourism Research Report No 15.
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data esp3\3.2.3_Aesthetic-value-GBR
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The survey dataset for identifying the Shiraz old silo’s new use, which includes four components:
1. The survey instrument used to collect the data, “SurveyInstrument_table.pdf”. The survey instrument contains 18 main closed-ended questions in a table format. Two of these concern information on the Silo’s decision-makers and proposed new use, following a short introduction to the questionnaire; the other 16 (each identifying 3 variables) relate to the level of appropriate opinions for ideal intervention in the Façade, Openings, Materials and Floor heights of the building across four values: Feasibility, Reversibility, Compatibility and Social Benefits.
2. The raw survey data, “SurveyData.rar”. This file contains an Excel .xlsx and an SPSS .sav file. The survey data file contains 50 variables (12 for each of the four values, separated by colour) and data from each of the 632 respondents. Answering each question in the survey was mandatory, therefore there are no blanks or non-responses in the dataset. In the .sav file, all variables were assigned a numeric type and a nominal measurement level. More details about each variable can be found in the Variable View tab of this file. Additional variables were created by grouping or consolidating categories within each survey question for simpler analysis. These variables are listed in the last columns of the .xlsx file.
3. The analysed survey data, “AnalysedData.rar”. This file contains 6 SPSS Statistics Output Documents which present statistical tests and analyses such as means, correlations, automatic linear regression, reliability, frequencies, and descriptives.
4. The codebook, “Codebook.rar”. The detailed SPSS “Codebook.pdf”, alongside the simplified codebook “VariableInformation_table.pdf”, provides a comprehensive guide to all 50 variables in the survey data, including the numerical codes for survey questions and response options. They serve as valuable resources for understanding the dataset, presenting dictionary information, and providing descriptive statistics, such as counts and percentages for categorical variables.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Integrated Postsecondary Education Data System (IPEDS) Complete Data Files from 1980 to 2023. Includes the data file, Stata data file, SPSS program, SAS program, Stata program, and dictionary. All years are compressed into one .zip file due to storage limitations.
From the IPEDS Complete Data File Help Page (https://nces.ed.gov/Ipeds/help/complete-data-files):
Choose the file to download by reading the description in the available titles. Then click on the link in that row corresponding to the column header of the type of file/information desired to download.
To download and view the survey files in basic CSV format, use the main download link in the Data File column.
For files compatible with the Stata statistical software package, use the alternate download link in the Stata Data File column.
To download files with the SPSS, SAS, or Stata (.do) file extension for use with statistical software packages, use the download link in the Programs column.
To download the data dictionary for the selected file, click on the corresponding link in the far right column of the screen. The data dictionary serves as a reference for using and interpreting the data within a particular survey file. This includes the names, definitions, and formatting conventions for each table, field, and data element within the file, important business rules, and information on any relationships to other IPEDS data.
For statistical read programs to work properly, both the data file and the corresponding read program file must be downloaded to the same subdirectory on the computer’s hard drive. Download the data file first; then click on the corresponding link in the Programs column to download the desired read program file to the same subdirectory.
When viewing downloaded survey files, categorical variables are identified using codes instead of labels. Labels for these variables are available in both the data read program files and the data dictionary for each file; however, for files that automatically incorporate this information you will need to select the Custom Data Files option.
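As an illustration, here is a minimal sketch of loading one downloaded survey file and applying value labels to a coded categorical variable, assuming Python with pandas; the file name, variable name, and label mapping are hypothetical placeholders, since the authoritative codes and labels come from the data dictionary or the SPSS/SAS/Stata read program downloaded alongside the file.

```python
# Minimal sketch: read a downloaded IPEDS survey file and relabel one
# coded categorical column. Assumes pandas; the file name, column name,
# and label mapping below are hypothetical placeholders transcribed
# from the file's data dictionary.
import pandas as pd

df = pd.read_csv("hd2023.csv")  # hypothetical survey file name

# Hypothetical code-to-label mapping taken from the data dictionary
control_labels = {1: "Public", 2: "Private not-for-profit", 3: "Private for-profit"}

df["CONTROL_LABEL"] = df["CONTROL"].map(control_labels)
print(df[["CONTROL", "CONTROL_LABEL"]].head())
```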
This workflow aims to analyze diverse soil datasets using PCA to understand physicochemical properties. The process starts with converting SPSS (.sav) files into CSV format for better compatibility. It emphasizes variable selection, data quality improvement, standardization, and conducting PCA for data variance and pattern analysis. The workflow includes generating graphical representations like covariance and correlation matrices, scree plots, and scatter plots. These tools aid in identifying significant variables, exploring data structure, and determining optimal components for effective soil analysis.
Background
Understanding the intricate relationships and patterns within soil samples is crucial for various environmental and agricultural applications. Principal Component Analysis (PCA) serves as a powerful tool in unraveling the complexity of multivariate soil datasets. Soil datasets often consist of numerous variables representing diverse physicochemical properties, making PCA an invaluable method for:
∙ Dimensionality Reduction: Simplifying the analysis without compromising data integrity by reducing the dimensionality of large soil datasets.
∙ Identification of Dominant Patterns: Revealing dominant patterns or trends within the data, providing insights into key factors contributing to overall variability.
∙ Exploration of Variable Interactions: Enabling the exploration of complex interactions between different soil attributes, enhancing understanding of their relationships.
∙ Interpretability of Data Variance: Clarifying how much variance is explained by each principal component, aiding in discerning the significance of different components and variables.
∙ Visualization of Data Structure: Facilitating intuitive comprehension of data structure through plots such as scatter plots of principal components, helping identify clusters, trends, and outliers.
∙ Decision Support for Subsequent Analyses: Providing a foundation for subsequent analyses by guiding decision-making, whether in identifying influential variables, understanding data patterns, or selecting components for further modeling.
Introduction
The motivation behind this workflow is rooted in the need to conduct a thorough analysis of a diverse soil dataset, characterized by an array of physicochemical variables. Comprising multiple rows, each representing distinct soil samples, the dataset encompasses variables such as percentage of coarse sands, percentage of organic matter, hydrophobicity, and others. The intricacies of this dataset demand a strategic approach to preprocessing, analysis, and visualization. This workflow centers around the exploration of soil sample variability through PCA, utilizing data formatted in SPSS (.sav) files. These files, specific to the Statistical Package for the Social Sciences (SPSS), are commonly used for data analysis. To lay the groundwork, the workflow begins with the transformation of an initial SPSS file into a CSV format, ensuring improved compatibility and ease of use throughout subsequent analyses. Incorporating PCA offers a sophisticated approach, enabling users to explore inherent patterns and structures within the data. The adaptability of PCA allows users to customize the analysis by specifying the number of components or the desired variance. The workflow concludes with practical graphical representations, including covariance and correlation matrices, a scree plot, and a scatter plot, offering users valuable visual insights into the complexities of the soil dataset.
Aims
The primary objectives of this workflow are tailored to address specific challenges and goals inherent in the analysis of diverse soil samples:
∙ Data transformation: Efficiently convert the initial SPSS file into a CSV format to enhance compatibility and ease of use.
∙ Standardization and target specification: Standardize the dataset and designate the target variable, ensuring consistency and preparing the data for subsequent PCA.
∙ PCA: Conduct PCA to explore patterns and variability within the soil dataset, facilitating a deeper understanding of the relationships between variables.
∙ Graphical representations: Generate graphical outputs, such as covariance and correlation matrices, aiding users in visually interpreting the complexities of the soil dataset.
Scientific questions
This workflow addresses critical scientific questions related to soil analysis:
∙ Variable importance: Identify variables contributing significantly to principal components through the covariance matrix and PCA.
∙ Data structure: Explore correlations between variables and gain insights from the correlation matrix.
∙ Optimal component number: Determine the optimal number of principal components using the scree plot for effective representation of data variance.
∙ Target-related patterns: Analyze how selected principal components correlate with the target variable in the scatter plot, revealing patterns based on target variable values.
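The following is a minimal sketch of the workflow described above (SPSS-to-CSV conversion, standardization, PCA, and a scree plot), assuming Python with pandas, pyreadstat, scikit-learn and matplotlib; the input file name and the identifier column are hypothetical, and the 95% variance threshold is only one possible choice for the number of components.

```python
# Minimal sketch of the described workflow: convert an SPSS .sav file
# to CSV, standardize the physicochemical variables, run PCA, and draw
# a scree plot. Assumes pandas, pyreadstat, scikit-learn, matplotlib;
# the file name and the "sample_id" column are hypothetical.
import pandas as pd
import pyreadstat
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# 1. Convert the SPSS file to CSV for better compatibility
df, meta = pyreadstat.read_sav("soil_samples.sav")
df.to_csv("soil_samples.csv", index=False)

# 2. Select numeric physicochemical variables and standardize them
X = df.drop(columns=["sample_id"], errors="ignore").select_dtypes("number").dropna()
X_std = StandardScaler().fit_transform(X)

# 3. PCA: keep enough components to explain 95% of the variance
pca = PCA(n_components=0.95)
scores = pca.fit_transform(X_std)
print("Components kept:", pca.n_components_)
print("Explained variance ratios:", pca.explained_variance_ratio_)

# 4. Graphical outputs: correlation matrix and scree plot
print(X.corr().round(2))
plt.plot(range(1, len(pca.explained_variance_ratio_) + 1),
         pca.explained_variance_ratio_, marker="o")
plt.xlabel("Principal component")
plt.ylabel("Explained variance ratio")
plt.title("Scree plot")
plt.show()
```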
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset accompanying the data descriptor for publication in Scientific Data entitled: Data on the prevalence of psychiatric symptoms in UK university students. More specifically, the current data provide crucial information concerning the prevalence of anxiety, depression, mania, insomnia, stress, suicidal ideation, psychotic experiences and loneliness amongst a sample of N=1408 UK university students. A cross-sectional online questionnaire-based study was implemented. Online recruitment for this dataset began on 17th September 2018 and ended on 30th July 2019. Eight validated measures were used: the Generalized Anxiety Disorder Scale; the Patient Health Questionnaire; the Mood Disorder Questionnaire; the Sleep Condition Indicator; the Perceived Stress Scale; the Suicidal Behaviours Questionnaire-Revised; the Prodromal Questionnaire 16 (PQ-16); and the University of California Loneliness Scale.
Data in SPSS format: Measured language variables across the cultural groups, in SPSS data file format. (Data.sav)
Data in CSV format: Equivalent data to the SPSS upload, in CSV format. (Data.csv)
Analysis syntax for SPSS: Syntax used to generate the reported results using SPSS. (Syntax.sps)
This dataset originates from a series of experimental studies titled “Tough on People, Tolerant to AI? Differential Effects of Human vs. AI Unfairness on Trust”. The project investigates how individuals respond to unfair behavior (distributive, procedural, and interactional unfairness) enacted by artificial intelligence versus human agents, and how such behavior affects cognitive and affective trust.
1. Experiment 1a: The Impact of AI vs. Human Distributive Unfairness on Trust
Overview: This dataset comes from an experimental study aimed at examining how individuals respond in terms of cognitive and affective trust when distributive unfairness is enacted by either an artificial intelligence (AI) agent or a human decision-maker. Experiment 1a specifically focuses on the main effect of the “type of decision-maker” on trust.
Data Generation and Processing: The data were collected through Credamo, an online survey platform. Initially, 98 responses were gathered from students at a university in China. Additional student participants were recruited via Credamo to supplement the sample. Attention check items were embedded in the questionnaire, and participants who failed were automatically excluded in real time. Data collection continued until 202 valid responses were obtained. SPSS software was used for data cleaning and analysis.
Data Structure and Format: The data file is named “Experiment1a.sav” and is in SPSS format. It contains 28 columns and 202 rows, where each row corresponds to one participant. Columns represent measured variables, including: grouping and randomization variables, one manipulation check item, four items measuring distributive fairness perception, six items on cognitive trust, five items on affective trust, three items for honesty checks, and four demographic variables (gender, age, education, and grade level). The final three columns contain computed means for distributive fairness, cognitive trust, and affective trust.
Additional Information: No missing data are present. All variable names are labeled in English abbreviations to facilitate further analysis. The dataset can be directly opened in SPSS or exported to other formats.
2. Experiment 1b: The Mediating Role of Perceived Ability and Benevolence (Distributive Unfairness)
Overview: This dataset originates from an experimental study designed to replicate the findings of Experiment 1a and further examine the potential mediating role of perceived ability and perceived benevolence.
Data Generation and Processing: Participants were recruited via the Credamo online platform. Attention check items were embedded in the survey to ensure data quality. Data were collected using a rolling recruitment method, with invalid responses removed in real time. A total of 228 valid responses were obtained.
Data Structure and Format: The dataset is stored in a file named Experiment1b.sav in SPSS format and can be directly opened in SPSS software. It consists of 228 rows and 40 columns. Each row represents one participant’s data record, and each column corresponds to a different measured variable. Specifically, the dataset includes: random assignment and grouping variables; one manipulation check item; four items measuring perceived distributive fairness; six items on perceived ability; five items on perceived benevolence; six items on cognitive trust; five items on affective trust; three items for attention check; and three demographic variables (gender, age, and education).
The last five columns contain the computed mean scores for perceived distributive fairness, ability, benevolence, cognitive trust, and affective trust.
Additional Notes: There are no missing values in the dataset. All variables are labeled using standardized English abbreviations to facilitate reuse and secondary analysis. The file can be analyzed directly in SPSS or exported to other formats as needed.
3. Experiment 2a: Differential Effects of AI vs. Human Procedural Unfairness on Trust
Overview: This dataset originates from an experimental study aimed at examining whether individuals respond differently in terms of cognitive and affective trust when procedural unfairness is enacted by artificial intelligence versus human decision-makers. Experiment 2a focuses on the main effect of the decision agent on trust outcomes.
Data Generation and Processing: Participants were recruited via the Credamo online survey platform from two universities located in different regions of China. A total of 227 responses were collected. After excluding those who failed the attention check items, 204 valid responses were retained for analysis. Data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in a file named Experiment2a.sav in SPSS format and can be directly opened in SPSS software. It contains 204 rows and 30 columns. Each row represents one participant’s response record, while each column corresponds to a specific variable. Variables include: random assignment and grouping; one manipulation check item; seven items measuring perceived procedural fairness; six items on cognitive trust; five items on affective trust; three attention check items; and three demographic variables (gender, age, and education). The final three columns contain computed average scores for procedural fairness, cognitive trust, and affective trust.
Additional Notes: The dataset contains no missing values. All variables are labeled using standardized English abbreviations to facilitate reuse and secondary analysis. The file can be directly analyzed in SPSS or exported to other formats as needed.
4. Experiment 2b: Mediating Role of Perceived Ability and Benevolence (Procedural Unfairness)
Overview: This dataset comes from an experimental study designed to replicate the findings of Experiment 2a and to further examine the potential mediating roles of perceived ability and perceived benevolence in shaping trust responses under procedural unfairness.
Data Generation and Processing: Participants were working adults recruited through the Credamo online platform. A rolling data collection strategy was used, where responses failing attention checks were excluded in real time. The final dataset includes 235 valid responses. All data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in a file named Experiment2b.sav, which is in SPSS format and can be directly opened using SPSS software. It contains 235 rows and 43 columns. Each row corresponds to a single participant, and each column represents a specific measured variable. These include: random assignment and group labels; one manipulation check item; seven items measuring procedural fairness; six items for perceived ability; five items for perceived benevolence; six items for cognitive trust; five items for affective trust; three attention check items; and three demographic variables (gender, age, education).
The final five columns contain the computed average scores for procedural fairness, perceived ability, perceived benevolence, cognitive trust, and affective trust.
Additional Notes: There are no missing values in the dataset. All variables are labeled using standardized English abbreviations to support future reuse and secondary analysis. The dataset can be directly analyzed in SPSS and easily converted into other formats if needed.
5. Experiment 3a: Effects of AI vs. Human Interactional Unfairness on Trust
Overview: This dataset comes from an experimental study that investigates how interactional unfairness, when enacted by either artificial intelligence or human decision-makers, influences individuals’ cognitive and affective trust. Experiment 3a focuses on the main effect of the “decision-maker type” under interactional unfairness conditions.
Data Generation and Processing: Participants were college students recruited from two universities in different regions of China through the Credamo survey platform. After excluding responses that failed attention checks, a total of 203 valid cases were retained from an initial pool of 223 responses. All data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in the file named Experiment3a.sav, in SPSS format and compatible with SPSS software. It contains 203 rows and 27 columns. Each row represents a single participant, while each column corresponds to a specific measured variable. These include: random assignment and condition labels; one manipulation check item; four items measuring interactional fairness perception; six items for cognitive trust; five items for affective trust; three attention check items; and three demographic variables (gender, age, education). The final three columns contain computed average scores for interactional fairness, cognitive trust, and affective trust.
Additional Notes: There are no missing values in the dataset. All variable names are provided using standardized English abbreviations to facilitate secondary analysis. The data can be directly analyzed using SPSS and exported to other formats as needed.
6. Experiment 3b: The Mediating Role of Perceived Ability and Benevolence (Interactional Unfairness)
Overview: This dataset comes from an experimental study designed to replicate the findings of Experiment 3a and further examine the potential mediating roles of perceived ability and perceived benevolence under conditions of interactional unfairness.
Data Generation and Processing: Participants were working adults recruited via the Credamo platform. Attention check questions were embedded in the survey, and responses that failed these checks were excluded in real time. Data collection proceeded in a rolling manner until a total of 227 valid responses were obtained. All data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in the file named Experiment3b.sav, in SPSS format and compatible with SPSS software. It includes 227 rows and
File type: SPSS file (.sav)
Study type: Cross-sectional study
Population: Medical assistants in Germany
Study period: April 7th-April 14th, 2020
Number of participants: 2150
Research question: Investigation of pandemic-related attitudes, stressors and work outcomes among medical assistants during the SARS-CoV-2 (“Coronavirus”) pandemic
Missing values: None (due to online survey)
Original variables: v_982, v_1, v_2, v_3, v_5, v_6, v_7, v_13, v_14, v_21, v_22, v_23, v_24, v_26, v_27, v_28, v_29, v_31, v_32, v_33, v_40, v_41, v_42, v_43, v_46, v_47, v_48, v_49, v_52, v_57, Beruf_MFA
All other variables were calculated from the original variables either by rescaling or dichotomization.
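As an illustration, here is a minimal sketch of the two derivation steps mentioned above (rescaling and dichotomization), assuming Python with pandas and pyreadstat; the file name, the chosen variable, the scale range, and the cut point are hypothetical, since the actual derivation rules are documented with the dataset.

```python
# Minimal sketch: derive new variables from an original v_* item by
# rescaling and by dichotomization. Assumes pandas and pyreadstat;
# the file name, variable, scale range and cut point are hypothetical.
import pandas as pd
import pyreadstat

df, meta = pyreadstat.read_sav("medical_assistants_survey.sav")  # hypothetical file name

# Rescaling: map a hypothetical 1-5 item onto a 0-100 range
df["v_21_rescaled"] = (df["v_21"] - 1) / 4 * 100

# Dichotomization: split at a hypothetical cut point (values >= 4 -> 1, else 0)
df["v_21_dichotomized"] = (df["v_21"] >= 4).astype(int)

print(df[["v_21", "v_21_rescaled", "v_21_dichotomized"]].head())
```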
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The study is part of the project “Designing effective extension service delivery systems for enhancing wider adoption of agricultural technologies”. The project aims to understand how extension services influence smallholder farmers’ decisions to adopt a new improved wheat variety, and also to offer guidance to the Ethiopian government on alternative extension approaches and capacity gaps of development agents. The dataset contains the results of a survey conducted to assess the baseline conditions of the experiment, targeting 1663 Ethiopian farmers and categorizing them into model and non-model farmers. NOTE: The original dataset was stored in SPSS Statistics file format.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains three types of files. The first is in Excel format. The Excel files consist of the raw data; descriptive data describing the background of the participants; correlation and confirmatory factor analysis outputs; and the survey questionnaire in Malay and English. The second file type contains the raw data in SPSS and the output in SPSS. The third file type contains the survey questionnaire; the items are bilingual.
Data records from the NYPD Stop, Question and Frisk Database. Data is made available in SPSS portable file format and Comma-Separated Values (CSV) format.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The dataset contains the results of a survey on Community Rangeland Management conducted in Syria in 2005, specifically rangeland field verification data collected at site and transect level. ICARDA, in collaboration with the Steppe Directorate of Syria and the Badia Rehabilitation Project, undertook a comprehensive survey of the Badia in 2005-06. The aim was to integrate production and socio-economic factors, to improve the capacity of stakeholders to develop technical and institutional interventions, and to enhance the sustainability of Bedouin livelihoods. NOTE: The original dataset was stored in SPSS Statistics file format, but it was not accessible without a proper licence.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: School closures in response to COVID-19 impacted children’s education, protection, and wellbeing. After understanding these impacts, and that children were not super-spreaders, countries including Ethiopia decided to reopen schools with specified preconditions. When deciding to reopen schools, however, the benefits and risks across education, public health and socio-economic factors have to be evaluated, and there was an information gap on the status of schools with respect to these preconditions. Hence, this study was designed to investigate the status of schools in Southern Ethiopia.
Methods: A school-based cross-sectional study was conducted in October 2020 in Southern Ethiopia. A sample of 430 schools was included. The national school reopening guideline was used to develop a checklist for the assessment. Data were collected by public health experts at the regional emergency operation center. Descriptive analysis was performed to summarize the data.
Results: A total of 430 schools were included. More than two thirds, 298 (69.3%), of the schools were from rural areas while 132 (30.7%) were from urban settings. The general infection prevention and water, sanitation and hygiene (IPC-WASH) status of the schools was poor, and COVID-19-specific preparations were inadequate to meet the national preconditions for reopening schools during the pandemic. The total score from the 24 items observed ranged from 3 to 22 points, with a mean score of 11.75 (SD±4.02). No school scored 100%, only 41 (9.5%) scored above 75%, and 216 (50.2%) scored below the halfway point of 12 items.
Conclusion: Both the basic and the COVID-19-specific IPC-WASH status of the schools were inadequate to implement the national school reopening preconditions and general standards. Some of the strategies planned to accommodate the teaching process and preconditions exacerbated inequalities in education. Although the impact of COVID-19 has lessened due to vaccination and other factors, it is rational to consider equipping schools with water and basic sanitation facilities to prevent communicable diseases of public health importance.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Deidentified Data Set (SPSS Format). Legend: This file contains the same raw data in SPSS format, for statistical analysis and replication of findings.