100+ datasets found
  1. Directory of free, open psychological datasets

    • osf.io
    Updated Jan 5, 2024
    Cite
    Cameron Brick; Laura Botzet; Cory Costello; Anatolia Batruch; Ruben Arslan; Melissa Kline Struhl; Nicolas Sommet; James Green; Michele Nuijten; Mark Conley; Thomas Richardson; Nicole Sorhagen; Anton Olsson-Collentine; Gilad Feldman; Franklin Feingold; Harry Manley; Michael Mullarkey; Tobias Dienlin; zhongyj; Christopher Madan (2024). Directory of free, open psychological datasets [Dataset]. http://doi.org/10.17605/OSF.IO/TH8EW
    Dataset updated
    Jan 5, 2024
    Dataset provided by
    Center For Open Science
    Authors
    Cameron Brick; Laura Botzet; Cory Costello; Anatolia Batruch; Ruben Arslan; Melissa Kline Struhl; Nicolas Sommet; James Green; Michele Nuijten; Mark Conley; Thomas Richardson; Nicole Sorhagen; Anton Olsson-Collentine; Gilad Feldman; Franklin Feingold; Harry Manley; Michael Mullarkey; Tobias Dienlin; zhongyj; Christopher Madan
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
  2. Raw data

    • figshare.com
    xlsx
    Updated Oct 22, 2020
    + more versions
    Cite
    Maria Alejandra Pinero de Plaza (2020). Raw data [Dataset]. http://doi.org/10.6084/m9.figshare.13114604.v1
    Available download formats: xlsx
    Dataset updated
    Oct 22, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Maria Alejandra Pinero de Plaza
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    raw data sets

  3. Biometric and psychometric life history indicators (criminals and control group)

    • data.mendeley.com
    Updated Nov 24, 2018
    Cite
    Monika Kwiek (2018). Biometric and psychometric life history indicators (criminals and control group) [Dataset]. http://doi.org/10.17632/9b5rtydnpk.1
    Dataset updated
    Nov 24, 2018
    Authors
    Monika Kwiek
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data files submitted here relate to research in which we compared psychological and biological indicators of the life history strategies of criminals (N=84) and a control group of men without a criminal record (N=117), working as soldiers (N=32; the last 32 items in the dataset) and firefighters (N=85; the first 85 items in the dataset).

    We hypothesized that there would be differences in life history strategies employed by these two groups of subjects and we also expected that biological and psychological life history indicators used in the study would correlate with each other as, according to life history theory, they are reflections of one consistent life history strategy.

    We used two questionnaires: the Mini-K (Figueredo et al., 2006), used to assess psychological aspects of life history strategy, and a questionnaire we created to measure biological life history variables such as age of the subjects’ parents at the appearance of their first child, father presence, number of biological siblings and step-siblings, twins in family, intervals between subsequent mother’s births, age at sexual onset, having children, age of becoming a father, number of offspring, number of women with whom the subjects have children, and life expectancy. The research on criminals took place in a medium-security correctional institution. Firefighters and soldiers participated in the study in their workplaces. All subjects completed the questionnaires in a paper-and-pencil version. Participation was voluntary.

    The results showed that criminals tended to employ faster life history strategies than men who have not been incarcerated, but this regularity only emerged in relation to biological variables. There were no intergroup differences in the context of psychological indicators of LH strategy measured by the Mini-K. Moreover, the overall correlation between the biological and psychological LH indicators used in this study was weak. Thus, in our study biological indicators proved to reliably reflect life history strategies of the subjects, in contrast to psychological variables.

    All statistical analysis was performed using SPSS and Statistica. Raw data as well as encoded data in SPSS format are attached.

    Figueredo, A.J., Vásquez, G., Brumbach, B.H., Schneider, S.M.R., Sefcek, J.A., Tal, I.R., Hill, D., Wenner, C.J., & Jacobs, W.J. (2006). Consilience and life history theory: From genes to brain to reproductive strategy. Developmental Review, 26, 243-275.

  4. Book Transcripts

    • redivis.com
    Updated Aug 2, 2025
    + more versions
    Cite
    Stanford University Libraries (2025). Book Transcripts [Dataset]. https://redivis.com/datasets/4ew0-9qer43ndg
    Dataset updated
    Aug 2, 2025
    Dataset authored and provided by
    Stanford University Libraries
    Time period covered
    Feb 21, 2023
    Description

    This is an auto-generated index table corresponding to a folder of files in this dataset with the same name. This table can be used to extract a subset of files based on their metadata, which can then be used for further analysis. You can view the contents of specific files by navigating to the "cells" tab and clicking on an individual file_id.

  5. raw data.sav

    • figshare.com
    bin
    Updated Jul 27, 2022
    Cite
    qian qiu (2022). raw data.sav [Dataset]. http://doi.org/10.6084/m9.figshare.20380221.v1
    Available download formats: bin
    Dataset updated
    Jul 27, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    qian qiu
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sample 1 was used for Exploratory Factor Analysis; Sample 2 was used for Confirmatory Factor Analysis.

  6. Data from: A predictive model of the knowledge-sharing intentions of social Q&A community members: A regression tree approach

    • data.mendeley.com
    • narcis.nl
    Updated Jul 9, 2021
    Cite
    Yang Cai (2021). A predictive model of the knowledge-sharing intentions of social Q&A community members: A regression tree approach [Dataset]. http://doi.org/10.17632/7ry2y9xwnz.1
    Dataset updated
    Jul 9, 2021
    Authors
    Yang Cai
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset and code for the paper: Cai, Y., Yang, Y., & Shi, W. (2021). A predictive model of the knowledge-sharing intentions of social Q&A community members: A regression tree approach. International Journal of Human–Computer Interaction, 1-15. https://doi.org/10.1080/10447318.2021.1938393

    Files:
    1. codes.html: code to replicate the regression tree (HTML version)
    2. codes.Rmd: code to replicate the regression tree (R Markdown version)
    3. dataset.sav: dataset incorporated into the decision tree
    4. indicators calculation syntax.sps: SPSS syntax to calculate the means of variables
    5. raw dataset.sav: raw data

  7. Supplementary

    • figshare.com
    Updated Mar 15, 2021
    Cite
    Daiki Hiraoka (2021). Supplementary [Dataset]. http://doi.org/10.6084/m9.figshare.14050586.v2
    Dataset updated
    Mar 15, 2021
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Daiki Hiraoka
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Raw data files and R markdown files to reproduce the supplementary information.

  8. Data Set C Subjects Responses

    • figshare.com
    • search.datacite.org
    xls
    Updated Jun 4, 2023
    Cite
    Kobi Snitz (2023). Data Set C Subjects Responses [Dataset]. http://doi.org/10.6084/m9.figshare.2058624.v3
    Available download formats: xls
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kobi Snitz
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data file contains the raw data for data set C in our paper “A cross modal performance-based measure of sensory stimuli intricacy”. These answers are averaged and then analyzed with the rest of the data sets in the script we supplied. All but subjects F21, F20, F15, F18, and M3 rated the odorants twice.

  9. Data Set B Subjects Responses

    • figshare.com
    xls
    Updated Jun 4, 2023
    + more versions
    Cite
    Kobi Snitz (2023). Data Set B Subjects Responses [Dataset]. http://doi.org/10.6084/m9.figshare.2058612.v2
    Available download formats: xls
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kobi Snitz
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data file contains the raw data for data set B in our paper “A cross modal performance-based measure of sensory stimuli intricacy”. These answers are averaged and then analyzed with the rest of the data sets in the script we supplied. All subjects rated the stimuli twice, except for subjects m3, m21, f14, and f21, who rated only once.

  10. Original Unedited Data Set

    • figshare.com
    zip
    Updated Jun 21, 2023
    Cite
    Jahne Coutts-Smith (2023). Original Unedited Data Set [Dataset]. http://doi.org/10.6084/m9.figshare.19687650.v1
    Available download formats: zip
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jahne Coutts-Smith
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the original unedited data set for the manuscript "The Role of Trait Mindfulness in the Association between Loneliness and Psychological Distress".

    The survey includes:

    Demographics
    The University of California Los Angeles Loneliness Scale–Version 3 (UCLA-LS; Russell, 1996)
    The Five-Facet Mindfulness Questionnaire–Short Form (FFMQ-SF; Bohlmeijer et al., 2011)
    The 21-item Depression, Anxiety, and Stress Scale (DASS-21; Lovibond & Lovibond, 1995)
    Questions regarding mindfulness and meditation practice
    Questions regarding relationships, home location, and household composition
    Questions regarding the impact of COVID-19 measures on employment

    References
    Bohlmeijer, E., ten Klooster, P. M., Fledderus, M., Veehof, M., & Baer, R. (2011). Psychometric properties of the Five Facet Mindfulness Questionnaire in depressed adults and development of a short form. Assessment, 18(3), 308-320. https://doi.org/10.1177/1073191111408231
    Lovibond, S. H., & Lovibond, P. F. (1995). Manual for the Depression Anxiety Stress Scales (2nd ed.). Psychology Foundation.
    Russell, D. W. (1996). UCLA Loneliness Scale (Version 3): Reliability, validity, and factor structure. Journal of Personality Assessment, 66(1), 20-40. https://doi.org/10.1207/s15327752jpa6601_2

  11. Dataset for "Economic irrationality is optimal during noisy decision making"

    • data.europa.eu
    unknown
    Updated Jul 3, 2025
    Cite
    Zenodo (2025). Dataset for "Economic irrationality is optimal during noisy decision making" [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-46426?locale=da
    Available download formats: unknown (15934126)
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description
    1. Excel files: The “experiment*.xlsx” files contain the raw data; an explanation of the relevant variables is offered in ‘Notes’. The “Parameters.xlsx” file contains the best-fitting parameters of the selective integration model (the version that omits early noise); each tab corresponds to a different experiment.
    2. Matlab files: The Matlab files in the ‘code’ folder allow one to reproduce the simulation results in fig1b, fig3b, and fig3d. The ‘selective_sim.m’ file offers a basic simulation of the selective integration model.

    For questions and further requests, please contact Konstantinos Tsetsos: k.tsetsos62@gmail.com
  12. Data from: LifeSnaps: a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild

    • data.niaid.nih.gov
    Updated Oct 20, 2022
    + more versions
    Cite
    Yfantidou, Sofia (2022). LifeSnaps: a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6826682
    Dataset updated
    Oct 20, 2022
    Dataset provided by
    Ferrari, Elena
    Giakatos, Dimitrios Panteleimon
    Yfantidou, Sofia
    Efstathiou, Stefanos
    Karagianni, Christina
    Marchioro, Thomas
    Kazlouski, Andrei
    Girdzijauskas, Šarūnas
    Palotti, Joao
    Vakali, Athena
    Description

    LifeSnaps Dataset Documentation

    Ubiquitous self-tracking technologies have penetrated various aspects of our lives, from physical and mental health monitoring to fitness and entertainment. Yet, limited data exist on the association between in the wild large-scale physical activity patterns, sleep, stress, and overall health, and behavioral patterns and psychological measurements due to challenges in collecting and releasing such datasets, such as waning user engagement, privacy considerations, and diversity in data modalities. In this paper, we present the LifeSnaps dataset, a multi-modal, longitudinal, and geographically-distributed dataset, containing a plethora of anthropological data, collected unobtrusively for the total course of more than 4 months by n=71 participants, under the European H2020 RAIS project. LifeSnaps contains more than 35 different data types from second to daily granularity, totaling more than 71M rows of data. The participants contributed their data through numerous validated surveys, real-time ecological momentary assessments, and a Fitbit Sense smartwatch, and consented to make these data available openly to empower future research. We envision that releasing this large-scale dataset of multi-modal real-world data, will open novel research opportunities and potential applications in the fields of medical digital innovations, data privacy and valorization, mental and physical well-being, psychology and behavioral sciences, machine learning, and human-computer interaction.

    The following instructions will get you started with the LifeSnaps dataset and are complementary to the original publication.

    Data Import: Reading CSV

    For ease of use, we provide CSV files containing Fitbit, SEMA, and survey data at daily and/or hourly granularity. You can read the files via any programming language. For example, in Python, you can read the files into a Pandas DataFrame with the pandas.read_csv() command.
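As a quick illustration, reading one of these CSV files into a pandas DataFrame might look like the sketch below. The column names and values here are invented stand-ins, not the actual LifeSnaps schema; in practice you would pass the path of a daily or hourly CSV shipped with the dataset.

```python
# Minimal sketch of loading a LifeSnaps-style CSV with pandas.
# The inline CSV text below is a hypothetical stand-in for a real file;
# in practice: df = pd.read_csv("daily_fitbit.csv", parse_dates=["date"])
import io

import pandas as pd

csv_text = """id,date,steps
user_a,2021-05-24,8311
user_a,2021-05-25,10023
"""

df = pd.read_csv(io.StringIO(csv_text), parse_dates=["date"])
print(df.shape)            # (2, 3)
print(df["steps"].mean())  # 9167.0
```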

    Data Import: Setting up a MongoDB (Recommended)

    To take full advantage of the LifeSnaps dataset, we recommend that you use the raw, complete data via importing the LifeSnaps MongoDB database.

    To do so, open the terminal/command prompt and run the following command for each collection in the DB. Ensure you have the MongoDB Database Tools installed.

    For the Fitbit data, run the following:

    mongorestore --host localhost:27017 -d rais_anonymized -c fitbit

    For the SEMA data, run the following:

    mongorestore --host localhost:27017 -d rais_anonymized -c sema

    For surveys data, run the following:

    mongorestore --host localhost:27017 -d rais_anonymized -c surveys

    If you have access control enabled, then you will need to add the --username and --password parameters to the above commands.

    Data Availability

    The MongoDB database contains three collections, fitbit, sema, and surveys, containing the Fitbit, SEMA3, and survey data, respectively. Similarly, the CSV files contain related information to these collections. Each document in any collection follows the format shown below:

    { _id: <ObjectId>, id (or user_id): <string>, type: <string>, data: <embedded object> }

    Each document consists of four fields: _id, id (also found as user_id in the sema and surveys collections), type, and data. The _id field is the MongoDB-defined primary key and can be ignored. The id field refers to a user-specific ID used to uniquely identify each user across all collections. The type field refers to the specific data type within the collection, e.g., steps, heart rate, calories, etc. The data field contains the actual information about the document, e.g., the steps count for a specific timestamp for the steps type, in the form of an embedded object. The contents of the data object are type-dependent, meaning that the fields within the data object differ between different types of data. As mentioned previously, all times are stored in local time, and user IDs are common across different collections. For more information on the available data types, see the related publication.
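To make that layout concrete, here is a small sketch that filters documents of one type and reads values out of their data field. The plain dicts stand in for a live collection, and every field value is invented; only the {_id, id, type, data} shape comes from the documentation above.

```python
# Toy stand-in for the fitbit collection: plain dicts following the
# {_id, id, type, data} layout described above. All values are invented.
docs = [
    {"_id": 1, "id": "user_a", "type": "steps",
     "data": {"dateTime": "2021-05-24T10:00:00", "value": 512}},
    {"_id": 2, "id": "user_a", "type": "heart_rate",
     "data": {"dateTime": "2021-05-24T10:00:00", "value": 71}},
    {"_id": 3, "id": "user_b", "type": "steps",
     "data": {"dateTime": "2021-05-24T10:00:00", "value": 230}},
]

# Keep only 'steps' documents, much as collection.find({"type": "steps"})
# would against the imported MongoDB database.
steps = [d for d in docs if d["type"] == "steps"]
total_steps = sum(d["data"]["value"] for d in steps)
print(len(steps), total_steps)  # 2 742
```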

    Surveys Encoding

    BREQ2

    Why do you engage in exercise?

    Code           Text
    engage[SQ001]  I exercise because other people say I should
    engage[SQ002]  I feel guilty when I don’t exercise
    engage[SQ003]  I value the benefits of exercise
    engage[SQ004]  I exercise because it’s fun
    engage[SQ005]  I don’t see why I should have to exercise
    engage[SQ006]  I take part in exercise because my friends/family/partner say I should
    engage[SQ007]  I feel ashamed when I miss an exercise session
    engage[SQ008]  It’s important to me to exercise regularly
    engage[SQ009]  I can’t see why I should bother exercising
    engage[SQ010]  I enjoy my exercise sessions
    engage[SQ011]  I exercise because others will not be pleased with me if I don’t
    engage[SQ012]  I don’t see the point in exercising
    engage[SQ013]  I feel like a failure when I haven’t exercised in a while
    engage[SQ014]  I think it is important to make the effort to exercise regularly
    engage[SQ015]  I find exercise a pleasurable activity
    engage[SQ016]  I feel under pressure from my friends/family to exercise
    engage[SQ017]  I get restless if I don’t exercise regularly
    engage[SQ018]  I get pleasure and satisfaction from participating in exercise
    engage[SQ019]  I think exercising is a waste of time

    PANAS

    Indicate the extent you have felt this way over the past week

    Code       Text
    P1[SQ001]  Interested
    P1[SQ002]  Distressed
    P1[SQ003]  Excited
    P1[SQ004]  Upset
    P1[SQ005]  Strong
    P1[SQ006]  Guilty
    P1[SQ007]  Scared
    P1[SQ008]  Hostile
    P1[SQ009]  Enthusiastic
    P1[SQ010]  Proud
    P1[SQ011]  Irritable
    P1[SQ012]  Alert
    P1[SQ013]  Ashamed
    P1[SQ014]  Inspired
    P1[SQ015]  Nervous
    P1[SQ016]  Determined
    P1[SQ017]  Attentive
    P1[SQ018]  Jittery
    P1[SQ019]  Active
    P1[SQ020]  Afraid
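As a hedged sketch, the PANAS items above could be scored in Python as follows. The positive/negative item split used here follows the standard PANAS scoring key (Watson, Clark, & Tellegen, 1988); that split is an assumption on my part and is not stated in the LifeSnaps documentation, as are the 1-5 ratings in the toy example.

```python
# Hedged sketch: summing PANAS positive-affect (PA) and negative-affect
# (NA) subscales from responses keyed by the P1[SQxxx] codes above.
# ASSUMPTION: the PA/NA item split follows the standard PANAS scoring
# key (Watson et al., 1988); it is not given in this documentation.
PA_ITEMS = (1, 3, 5, 9, 10, 12, 14, 16, 17, 19)   # Interested, Excited, ...
NA_ITEMS = (2, 4, 6, 7, 8, 11, 13, 15, 18, 20)    # Distressed, Upset, ...

def panas_scores(responses):
    """responses maps codes like 'P1[SQ001]' to ratings (e.g., 1-5)."""
    pa = sum(responses[f"P1[SQ{i:03d}]"] for i in PA_ITEMS)
    na = sum(responses[f"P1[SQ{i:03d}]"] for i in NA_ITEMS)
    return pa, na

# Toy example: every item rated 3 gives 30 on each subscale.
example = {f"P1[SQ{i:03d}]": 3 for i in range(1, 21)}
print(panas_scores(example))  # (30, 30)
```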

    Personality

    How Accurately Can You Describe Yourself?

    Code          Text
    ipip[SQ001]   Am the life of the party.
    ipip[SQ002]   Feel little concern for others.
    ipip[SQ003]   Am always prepared.
    ipip[SQ004]   Get stressed out easily.
    ipip[SQ005]   Have a rich vocabulary.
    ipip[SQ006]   Don't talk a lot.
    ipip[SQ007]   Am interested in people.
    ipip[SQ008]   Leave my belongings around.
    ipip[SQ009]   Am relaxed most of the time.
    ipip[SQ010]   Have difficulty understanding abstract ideas.
    ipip[SQ011]   Feel comfortable around people.
    ipip[SQ012]   Insult people.
    ipip[SQ013]   Pay attention to details.
    ipip[SQ014]   Worry about things.
    ipip[SQ015]   Have a vivid imagination.
    ipip[SQ016]   Keep in the background.
    ipip[SQ017]   Sympathize with others' feelings.
    ipip[SQ018]   Make a mess of things.
    ipip[SQ019]   Seldom feel blue.
    ipip[SQ020]   Am not interested in abstract ideas.
    ipip[SQ021]   Start conversations.
    ipip[SQ022]   Am not interested in other people's problems.
    ipip[SQ023]   Get chores done right away.
    ipip[SQ024]   Am easily disturbed.
    ipip[SQ025]   Have excellent ideas.
    ipip[SQ026]   Have little to say.
    ipip[SQ027]   Have a soft heart.
    ipip[SQ028]   Often forget to put things back in their proper place.
    ipip[SQ029]   Get upset easily.
    ipip[SQ030]   Do not have a good imagination.
    ipip[SQ031]   Talk to a lot of different people at parties.
    ipip[SQ032]   Am not really interested in others.
    ipip[SQ033]   Like order.
    ipip[SQ034]   Change my mood a lot.
    ipip[SQ035]   Am quick to understand things.
    ipip[SQ036]   Don't like to draw attention to myself.
    ipip[SQ037]   Take time out for others.
    ipip[SQ038]   Shirk my duties.
    ipip[SQ039]   Have frequent mood swings.
    ipip[SQ040]   Use difficult words.
    ipip[SQ041]   Don't mind being the centre of attention.
    ipip[SQ042]   Feel others' emotions.
    ipip[SQ043]   Follow a schedule.
    ipip[SQ044]   Get irritated easily.
    ipip[SQ045]   Spend time reflecting on things.
    ipip[SQ046]   Am quiet around strangers.
    ipip[SQ047]   Make people feel at ease.
    ipip[SQ048]   Am exacting in my work.
    ipip[SQ049]   Often feel blue.
    ipip[SQ050]   Am full of ideas.

    STAI

    Indicate how you feel right now

    Code          Text
    STAI[SQ001]   I feel calm
    STAI[SQ002]   I feel secure
    STAI[SQ003]   I am tense
    STAI[SQ004]   I feel strained
    STAI[SQ005]   I feel at ease
    STAI[SQ006]   I feel upset
    STAI[SQ007]   I am presently worrying over possible misfortunes
    STAI[SQ008]   I feel satisfied
    STAI[SQ009]   I feel frightened
    STAI[SQ010]   I feel comfortable
    STAI[SQ011]   I feel self-confident
    STAI[SQ012]   I feel nervous
    STAI[SQ013]   I am jittery
    STAI[SQ014]   I feel indecisive
    STAI[SQ015]   I am relaxed
    STAI[SQ016]   I feel content
    STAI[SQ017]   I am worried
    STAI[SQ018]   I feel confused
    STAI[SQ019]   I feel steady
    STAI[SQ020]   I feel pleasant

    TTM

    Do you engage in regular physical activity according to the definition above? How frequently did each event or experience occur in the past month?

    Code              Text
    processes[SQ002]  I read articles to learn more about physical
    
  13. Income, Wellbeing, Love for Money Raw Data (.csv)

    • figshare.com
    txt
    Updated May 30, 2023
    Cite
    Siddharth Garg (2023). Income, Wellbeing, Love for Money Raw Data (.csv) [Dataset]. http://doi.org/10.6084/m9.figshare.8869040.v1
    Available download formats: txt
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Siddharth Garg
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The dataset consists of 113 responses taken directly from a Google Form survey consisting of four demographic questions (age, sex, country, income), a single item on Love for Money, and the WHO-5 Wellbeing Questionnaire. This is a completely raw, anonymous dataset. The data were collected as part of a study examining the relationship between income and wellbeing as mediated/moderated by love for money.

  14. Publication Metadata

    • redivis.com
    Updated Aug 2, 2025
    + more versions
    Cite
    Stanford University Libraries (2025). Publication Metadata [Dataset]. https://redivis.com/datasets/4ew0-9qer43ndg
    Dataset updated
    Aug 2, 2025
    Dataset authored and provided by
    Stanford University Libraries
    Description

    The table Publication Metadata is part of the dataset Counseling and Psychotherapy Transcripts: Volume I [full text data], available at https://stanford.redivis.com/datasets/4ew0-9qer43ndg. It contains 70567 rows across 43 variables.

  15. Data from: Looking beyond the mirror: Psychological distress; disordered eating, weight and shape concerns; and maladaptive eating habits in lawyers and law students

    • data.mendeley.com
    • researchdata.edu.au
    Updated Feb 1, 2019
    Cite
    Natalie Skead (2019). Looking beyond the mirror: Psychological distress; disordered eating, weight and shape concerns; and maladaptive eating habits in lawyers and law students [Dataset]. http://doi.org/10.17632/dfwxdgfwbc.1
    Dataset updated
    Feb 1, 2019
    Authors
    Natalie Skead
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These are the raw data files for the article: Skead, N. K., Rogers, S. L., & Doraisamy, J. (2018). Looking beyond the mirror: psychological distress; disordered eating, weight and shape concerns; and maladaptive eating habits in lawyers and law students. International Journal of Law and Psychiatry.

  16. Open data: Frequency mismatch negativity and visual load

    • data.europa.eu
    • researchdata.se
    • +1more
    unknown
    Updated Aug 28, 2019
    Cite
    Stockholms universitet (2019). Open data: Frequency mismatch negativity and visual load [Dataset]. https://data.europa.eu/data/datasets/https-doi-org-10-17045-sthlmuni-7016369?locale=bg
    Available download formats: unknown
    Dataset updated
    Aug 28, 2019
    Dataset authored and provided by
    Stockholms universitet
    Description

    Wiens, S., van Berlekom, E., Szychowska, M., & Eklund, R. (2019). Visual Perceptual Load Does Not Affect the Frequency Mismatch Negativity. Frontiers in Psychology, 10(1970). doi:10.3389/fpsyg.2019.01970

    We manipulated visual perceptual load (high and low load) while we recorded electroencephalography. Event-related potentials (ERPs) were computed from these data.

    OSF_*.pdf contains the preregistration at open science framework (osf). https://doi.org/10.17605/OSF.IO/EWG9X

    ERP_2019_rawdata_bdf.zip contains the raw EEG data files that were recorded with a Biosemi system (www.biosemi.com). The files can be opened in Matlab with the FieldTrip toolbox. https://www.mathworks.com/products/matlab.html http://www.fieldtriptoolbox.org/

    ERP_2019_visual_load_fieldtrip_scripts.zip contains all the matlab scripts that were used to process the ERP data with the toolbox fieldtrip. http://www.fieldtriptoolbox.org/

    ERP_2019_fieldtrip_mat_*.zip contain the final, preprocessed individual data files. They can be opened with matlab.

    ERP_2019_visual_load_python_scripts.zip contains the Python scripts for the main task. They require Python (https://www.python.org/) and PsychoPy (http://www.psychopy.org/).

    ERP_2019_visual_load_wmc_R_scripts.zip contains the R scripts to process the working memory capacity (wmc) data. https://www.r-project.org/.

    ERP_2019_visual_load_R_scripts.zip contains the R scripts to analyze the data and the output files with figures (eg scatterplots). https://www.r-project.org/.

  17. Adaptive Interaction data

    • search.datacite.org
    • figshare.com
    Updated Aug 23, 2017
    Cite
    Maggie Ellis (2017). Adaptive Interaction data [Dataset]. http://doi.org/10.6084/m9.figshare.5161384.v2
    Dataset updated
    Aug 23, 2017
    Dataset provided by
    DataCite (https://www.datacite.org/)
    Figshare (http://figshare.com/)
    Authors
    Maggie Ellis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Raw data set for Adaptive Interaction study

  18. Open data: Visual load effects on the auditory steady-state responses to...

    • su.figshare.com
    • researchdata.se
    • +1more
    txt
    Updated May 30, 2023
    + more versions
    Stefan Wiens; Malina Szychowska (2023). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones [Dataset]. http://doi.org/10.17045/sthlmuni.12582002.v1
    Explore at:
    txtAvailable download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    Stockholm University
    Authors
    Stefan Wiens; Malina Szychowska
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The main results files are saved separately:
    - ASSR2.html: R output of the main analyses (N = 33)
    - ASSR2_subset.html: R output of the main analyses for the smaller sample (N = 25)

    FIGSHARE METADATA
    Categories:
    - Biological psychology
    - Neuroscience and physiological psychology
    - Sensory processes, perception, and performance
    Keywords:
    - crossmodal attention
    - electroencephalography (EEG)
    - early-filter theory
    - task difficulty
    - envelope following response
    References:
    - https://doi.org/10.17605/OSF.IO/6FHR8
    - https://github.com/stamnosslin/mn
    - https://doi.org/10.17045/sthlmuni.4981154.v3
    - https://biosemi.com/
    - https://www.python.org/
    - https://mne.tools/stable/index.html#
    - https://www.r-project.org/
    - https://rstudio.com/products/rstudio/

    GENERAL INFORMATION
    1. Title of Dataset: Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones
    2. Author Information
       A. Principal Investigator: Stefan Wiens, Department of Psychology, Stockholm University, Sweden; https://www.su.se/profiles/swiens-1.184142; sws@psychology.su.se
       B. Co-investigator: Malina Szychowska, Department of Psychology, Stockholm University, Sweden; https://www.researchgate.net/profile/Malina_Szychowska; malina.szychowska@psychology.su.se
    3. Date of data collection: Subjects (N = 33) were tested between 2019-11-15 and 2020-03-12.
    4. Geographic location of data collection: Department of Psychology, Stockholm, Sweden
    5. Funding sources that supported the collection of the data: Swedish Research Council (Vetenskapsrådet), 2015-01181

    SHARING/ACCESS INFORMATION
    1. Licenses/restrictions placed on the data: CC BY 4.0
    2. Publications that cite or use the data: Szychowska, M., & Wiens, S. (2020). Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Submitted manuscript. The study was preregistered: https://doi.org/10.17605/OSF.IO/6FHR8
    3. Other publicly accessible locations of the data: N/A
    4. Links/relationships to ancillary data sets: N/A
    5. Was data derived from another source? No
    6. Recommended citation for this dataset: Wiens, S., & Szychowska, M. (2020). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.12582002

    DATA & FILE OVERVIEW
    The files contain the raw data, scripts, and results of the main and supplementary analyses of an electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.
    - ASSR2_experiment_scripts.zip: Python files to run the experiment
    - ASSR2_rawdata.zip: raw data files for each subject
      - data_EEG: EEG data in bdf format (generated by Biosemi)
      - data_log: logfiles of the EEG session (generated by Python)
    - ASSR2_EEG_scripts.zip: Python-MNE scripts to process the EEG data
    - ASSR2_EEG_preprocessed_data.zip: EEG data in fif format after preprocessing with the Python-MNE scripts
    - ASSR2_R_scripts.zip: R scripts to analyze the data, together with the main data files. The main files in the folder are:
      - ASSR2.html: R output of the main analyses
      - ASSR2_subset.html: R output of the main analyses after excluding eight subjects who were recorded as pilots before the study was preregistered
    - ASSR2_results.zip: all figures and tables created by Python-MNE and R

    METHODOLOGICAL INFORMATION
    1. Methods used for collection/generation of data: The auditory stimuli were amplitude-modulated tones with a carrier frequency (fc) of 500 Hz and modulation frequencies (fm) of 20.48 Hz, 40.96 Hz, or 81.92 Hz. The experiment was programmed in Python (https://www.python.org/) and used extra functions from https://github.com/stamnosslin/mn. The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and saved in .bdf format. For more information, see the linked publication.
    2. Methods for processing the data: We conducted frequency analyses and computed event-related potentials. See the linked publication.
    3. Instrument- or software-specific information needed to interpret the data:
       - MNE-Python (Gramfort et al., 2013): https://mne.tools/stable/index.html#
       - RStudio used with R (R Core Team, 2020): https://rstudio.com/products/rstudio/
       - Wiens, S. (2017). Aladins Bayes Factor in R (Version 3). https://www.doi.org/10.17045/sthlmuni.4981154.v3
    4. Standards and calibration information: see the linked publication.
    5. Environmental/experimental conditions: see the linked publication.
    6. Quality-assurance procedures performed on the data: see the linked publication.
    7. People involved with sample collection, processing, analysis and/or submission:
       - Data collection: Malina Szychowska with assistance from Jenny Arctaedius.
       - Data processing, analysis, and submission: Malina Szychowska and Stefan Wiens

    DATA-SPECIFIC INFORMATION
    All relevant information can be found in the MNE-Python and R scripts (in the EEG_scripts and analysis_scripts folders) that process the raw data. For example, we added notes to explain what different variables mean.
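As a rough illustration of the stimuli described in the README (carrier fc = 500 Hz; modulation fm = 20.48, 40.96, or 81.92 Hz), the sketch below generates one amplitude-modulated tone with the standard library. It is a hedged sketch: sinusoidal modulation at full depth, the sampling rate, and the duration are our assumptions, not taken from the dataset.

```python
import math

def am_tone(fc=500.0, fm=40.96, fs=8192, dur=1.0, depth=1.0):
    """Amplitude-modulated sine tone as a list of float samples in [-1, 1].

    fc: carrier frequency (Hz); fm: modulation frequency (Hz);
    fs: sampling rate (Hz); dur: duration (s); depth: modulation depth (0-1).
    """
    n = int(round(fs * dur))
    return [
        0.5 * (1.0 + depth * math.sin(2.0 * math.pi * fm * i / fs))
        * math.sin(2.0 * math.pi * fc * i / fs)
        for i in range(n)
    ]
```

The envelope term is scaled so the output never clips; the actual experiment scripts in ASSR2_experiment_scripts.zip define the stimuli authoritatively.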

  19. USPTO PatentsView Data

    • dataverse.lib.virginia.edu
    zip
    Updated Dec 14, 2022
    University of Virginia Dataverse (2022). USPTO PatentsView Data [Dataset]. http://doi.org/10.18130/V3/YOTFLM
    Explore at:
    zip(82168867), zip(1502), zip(55447027), zip(5440740), zip(31305926), zip(1652995365), zip(281237896)Available download formats
    Dataset updated
    Dec 14, 2022
    Dataset provided by
    University of Virginia Dataverse
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    1976 - 2021
    Description

    This is raw data downloaded from USPTO (US Patent and Trademark Office) PatentsView (https://patentsview.org/download/data-download-tables) on 8/17/22. Last updated dates from PatentsView:
    - all datasets except inventor: 3/29/22
    - inventor data set: 5/22/22
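The PatentsView bulk tables are distributed as zipped delimited text files. As a hedged sketch (the member name "patent.tsv" and the tab delimiter are assumptions; check the contents of the actual archives), one table can be streamed row by row without extracting the zip:

```python
import csv
import io
import zipfile

def iter_table_rows(zip_path, member, delimiter="\t"):
    """Stream one delimited table inside a zip archive as dicts, row by row.

    zip_path: path or file-like object of the downloaded archive;
    member: name of the table file inside the zip (hypothetical example:
    "patent.tsv"); delimiter: field separator, assumed to be a tab.
    """
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open(member) as fh:
            # Wrap the binary stream so csv can read text incrementally.
            text = io.TextIOWrapper(fh, encoding="utf-8", newline="")
            yield from csv.DictReader(text, delimiter=delimiter)
```

Streaming matters here because some of the archives are large (one zip above is about 1.6 GB compressed).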

  20. Data from: Spanish sample responses to the Statistical Anxiety Scale...

    • dataverse.csuc.cat
    txt
    Updated Feb 1, 2024
    + more versions
    Urbano Lorenzo-Seva; Andreu Vigil-Colet; Pere J. Ferrando (2024). Spanish sample responses to the Statistical Anxiety Scale psychological test [Dataset]. http://doi.org/10.34810/data1075
    Explore at:
    txt(147467), txt(6837), txt(5285)Available download formats
    Dataset updated
    Feb 1, 2024
    Dataset provided by
    CORA.Repositori de Dades de Recerca
    Authors
    Urbano Lorenzo-Seva; Andreu Vigil-Colet; Pere J. Ferrando
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2019 - Jan 25, 2024
    Area covered
    Spain
    Dataset funded by
    https://ror.org/003x0zc53
    Description

    Raw data for participants who answered the Statistical Anxiety Scale. A sample of 681 undergraduate Spanish students (80% women) enrolled in a Psychology degree, aged 18-60 years (M = 20.5, SD = 4.9), completed the computer version of the test. Raw responses and the time to respond to each item were recorded. The test included a scale to assess social desirability. In addition, the examination mark in a statistics test was recorded for a subset of participants (N = 430). Total scores were computed as the sum of item responses. ORION factor score estimates for the test scales were obtained using factor analysis (ULS extraction, Robust Promin rotation, and ORION score estimates); the software used to compute the factor analysis was FACTOR. The information provided is: sample descriptives, participants' responses, item response times, and participants' scores on the test scales.
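The description states that total scores were computed as the sum of item responses. A minimal sketch of that scoring step (the item labels and scale grouping below are hypothetical; the ORION factor score estimates were computed separately in the FACTOR program and are not reproduced here):

```python
def scale_totals(responses, scale_items):
    """Total score per scale as the sum of that scale's raw item responses.

    responses: dict mapping item id -> raw response (assumes no missing data);
    scale_items: dict mapping scale name -> list of item ids in that scale.
    """
    return {
        scale: sum(responses[item] for item in items)
        for scale, items in scale_items.items()
    }
```

For example, with hypothetical items a1, a2 on an anxiety scale and d1 on a desirability scale, responses {a1: 3, a2: 4, d1: 2} yield totals {anxiety: 7, desirability: 2}.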
