6 datasets found
  1. Project Panormos Archaeological Survey: Archaeological Data (survey-data)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 17, 2020
    + more versions
    Cite
    Wilkinson, Toby C (2020). Project Panormos Archaeological Survey: Archaeological Data (survey-data) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_1039979
    Dataset updated
    Jul 17, 2020
    Dataset provided by
    Wilkinson, Toby C
    Slawisch, Anja
    Strupler, Néhémie
    Description

    This forms part of the preliminary open data release for the Project Panormos archaeological survey.

    This "panormos/survey-data" repository contains archaeological data collected as part of the Project Panormos intensive archaeological survey, including tract data (pottery counts, environment, etc.), POI (points-of-interest), finds descriptions and contexts, and photo metadata. Scripts to correct raw imported data files (e.g. from ODK software) into the "canonical" versions are also made available here.

    Release 0.2.0 includes data from the 2015, 2017 and 2019 seasons. It is a pre-publication or "observation" version. No derivative works should be made until the expiry of the observation phase: please see enclosed LICENSE file for details.

    Release 0.1.0 includes data from the 2015 season. It is a pre-publication or "observation" version. No derivative works should be made until the expiry of the observation phase: please see enclosed LICENSE file for details.

    Releases below 1.0.0 represent preprint working datasets issued before final publication. Although every effort has been made to reduce errors and to make the datasets easy to navigate, bear in mind that the data remain in a "beta" state.

  2. Open Context Database SQL Dump

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 23, 2025
    + more versions
    Cite
    Kansa, Sarah Whitcher (2025). Open Context Database SQL Dump [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14728228
    Dataset updated
    Jan 23, 2025
    Dataset provided by
    Kansa, Eric
    Kansa, Sarah Whitcher
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py

    The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational Mapper (ORM).

    This database dump includes all of the published structured data organized and used by Open Context (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long-term digital repositories.

    These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.
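    After restoring the dump into a local PostgreSQL instance, one quick sanity check is to list the 'oc_all_' tables that hold the published structured data. A minimal Python sketch follows, assuming psycopg2 is installed and that the dump was restored into a hypothetical local database named "opencontext" with placeholder credentials (adjust these to your own setup):

    import psycopg2  # PostgreSQL client library

    # Connection parameters are placeholders, not part of the dataset documentation.
    conn = psycopg2.connect(dbname="opencontext", user="postgres",
                            password="postgres", host="localhost")
    with conn, conn.cursor() as cur:
        # Published structured data live in tables whose names start with 'oc_all_'.
        cur.execute(
            "SELECT table_name FROM information_schema.tables "
            "WHERE table_schema = 'public' AND table_name LIKE 'oc_all_%' "
            "ORDER BY table_name"
        )
        for (table_name,) in cur.fetchall():
            print(table_name)
    conn.close()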

    IMPORTANT

    This database dump contains data from more than 190 different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo-archived database dump.

  3. Online survey data for the 2017 Aesthetic value project (NESP TWQ 3.2.3, Griffith Institute for Tourism Research)

    • catalogue.eatlas.org.au
    Updated Nov 22, 2019
    + more versions
    Cite
    Australian Institute of Marine Science (AIMS) (2019). Online survey data for the 2017 Aesthetic value project (NESP TWQ 3.2.3, Griffith Institute for Tourism Research) [Dataset]. https://catalogue.eatlas.org.au/geonetwork/srv/api/records/595f79c7-b553-4aab-9ad8-42c092508f81
    Explore at:
    www:link-1.0-http--downloaddata, www:link-1.0-http--related (available download formats)
    Dataset updated
    Nov 22, 2019
    Dataset provided by
    Australian Institute of Marine Science (AIMS)
    Time period covered
    Jan 28, 2017 - Jan 28, 2018
    Description

    This dataset consists of three data folders containing all documents related to the online survey conducted within the NESP 3.2.3 project (Tropical Water Quality Hub), plus a survey format document showing how the survey was designed. Apart from participants’ demographic information, the survey consists of three sections: conjoint analysis, picture rating and an open question. The corresponding outcomes of these three sections were downloaded from the Qualtrics website and used in three different data analysis processes.

    Data related to the first section, “conjoint analysis”, are saved in the Conjoint analysis folder, which contains two sub-folders. The first includes a plan file in .SAV format representing the design suggested by SPSS orthogonal analysis for testing beauty factors, together with the 9 photoshopped pictures used in the survey. The second (i.e. Final results) contains one .SAV file named “data1”, the conjoint analysis responses imported into SPSS; one .SPS file named “Syntax1”, the code used to run the conjoint analysis; two .SAV files that are the output of the conjoint analysis in SPSS; and one .SPV file named “Final output” showing the results of further SPSS analysis based on the utility and importance data.

    Data related to the second section, “Picture rating”, are saved in the Picture rating folder, which includes two subfolders. One subfolder contains the 2,500 pictures of the Great Barrier Reef used in the rating section; these pictures are organised by name and stored in two folders, “Survey Part 1” and “Survey Part 2”, corresponding to the two parts of the rating section. The other subfolder, “Rating results”, consists of one .XLSX file with the survey results downloaded from the Qualtrics website.

    Finally, data related to the open question are saved in the “Open question” folder. It contains one .CSV file and one .PDF file recording participants’ answers to the open question, as well as one .PNG file showing a screenshot of the Leximancer analysis outcome.

    Methods: This dataset resulted from the input and output of an online survey on how people assess the beauty of the Great Barrier Reef. The survey was designed for multiple purposes and includes three main sections: (1) conjoint analysis (ranking 9 photoshopped pictures to determine the relative importance weights of beauty attributes), (2) picture rating (2,500 pictures to be rated) and (3) an open question on the factors that make a picture of the Great Barrier Reef beautiful in participants’ opinion (determining beauty factors from the tourist perspective). Pictures used in this survey were downloaded from public sources such as the websites of Tourism and Events Queensland and Tropical Tourism North Queensland, as well as from tourist photo-sharing sources (i.e. Flickr). Flickr pictures were downloaded using the key words “Great Barrier Reef”. About 10,000 pictures were downloaded in August and September 2017. 2,500 pictures were then selected based on several research criteria: (1) underwater pictures of the GBR, (2) without humans, (3) viewed from 1-2 metres from the objects and (4) of high resolution.

    The survey was created and launched on 4 October 2017 using the Qualtrics survey service. Each participant rated 50 pictures randomly selected from the pool of 2,500 survey pictures. 772 survey completions were recorded, and 705 questionnaires were eligible for data analysis after unqualified questionnaires were filtered out. Conjoint analysis data were imported into IBM SPSS in .SAV format and the output was saved in .SPV format. Each of the 2,500 Great Barrier Reef pictures was rated (on a 1-10 scale) by at least 10 participants; this rating dataset was saved in an .XLSX file and used to train and test an Artificial Intelligence (AI)-based system for recognising and assessing the beauty of natural scenes. Answers to the open question were saved in an .XLSX file and a .PDF file for theme analysis with the Leximancer software.
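    As an illustration only, the picture-rating results could be summarised in Python once downloaded; in the sketch below the file name ("rating_results.xlsx") and the reshaping assume a Qualtrics-style export with one column per picture (e.g. Q1_1, Q2.1_1 as in the Data Dictionary below), so treat both as placeholders rather than as part of the dataset:

    import pandas as pd  # reading .xlsx also requires openpyxl

    # Placeholder file name; substitute the file from the "Rating results" subfolder.
    ratings = pd.read_excel("rating_results.xlsx")

    # Reshape from one column per picture to long format, then compute
    # the mean rating and the number of raters per picture.
    long_form = ratings.melt(var_name="picture_id", value_name="rating").dropna()
    summary = long_form.groupby("picture_id")["rating"].agg(["mean", "count"])
    print(summary.sort_values("mean", ascending=False).head())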

    Further information can be found in the following publication: Becken, S., Connolly R., Stantic B., Scott N., Mandal R., Le D., (2018), Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence, Griffith Institute for Tourism Research Report No 15.

    Format: The Online survey dataset includes one PDF file representing the survey format with all sections and questions. It also contains three subfolders, each with multiple files. The Conjoint analysis subfolder contains the 9 .JPG pictures, one .SAV file for the Orthoplan subroutine outcome and 5 outcome documents (i.e. 3 .SAV files, 1 .SPS file, 1 .SPV file). The Picture rating subfolder contains the 2,500 pictures used in the survey and 1 Excel file with the rating results. The Open question subfolder includes 1 .CSV file and 1 .PDF file recording participants’ answers and one .PNG file for the analysis outcome.

    Data Dictionary:

    • Card 1: Picture design option number 1 suggested by SPSS orthogonal analysis.
    • Importance value: The relative importance weight of each beauty attribute calculated by SPSS conjoint analysis.
    • Utility: Score reflecting the influential valence and degree of each beauty attribute on the beauty score.
    • Syntax: Code used to run the conjoint analysis in SPSS.
    • Leximancer: Specialised software for qualitative data analysis.
    • Concept map: A map showing the relationship between the concepts identified.
    • Q1_1: Beauty score of picture Q1_1 by the corresponding participant (i.e. survey part 1).
    • Q2.1_1: Beauty score of picture Q2.1_1 by the corresponding participant (i.e. survey part 2).
    • Conjoint_1: Ranking of picture 1 designed for conjoint analysis by the corresponding participant.

    References: Becken, S., Connolly R., Stantic B., Scott N., Mandal R., Le D., (2018), Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence, Griffith Institute for Tourism Research Report No 15.

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data esp3\3.2.3_Aesthetic-value-GBR

  4. Instagram Reach Analysis - Excel Project

    • kaggle.com
    Updated Jun 14, 2025
    Cite
    Raghad Al-marshadi (2025). Instagram Reach Analysis - Excel Project [Dataset]. https://www.kaggle.com/datasets/raghadalmarshadi/instagram-reach-analysis-excel-project/code
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 14, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Raghad Al-marshadi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    📊 Instagram Reach Analysis

    An exploratory data analysis project using Excel to understand what influences Instagram post reach and engagement.

    📁 Project Description

    This project uses an Instagram dataset imported from Kaggle to explore how different factors like hashtags, saves, shares, and caption length influence impressions and engagement.

    🛠️ Tools Used

    • Microsoft Excel
    • Pivot Tables
    • TRIM, WRAP, and other Excel formulas

    🧹 Data Cleaning

    • Removed unnecessary spaces using TRIM
    • Removed 17 duplicate rows → 103 unique rows remained
    • Standardized formatting: freeze top row, wrap text, center align
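    The cleaning above was done entirely in Excel; for readers who prefer a scriptable check, a rough Python equivalent of the first two steps is sketched below. The file name ("instagram_reach.xlsx") and the choice to trim every text column are assumptions for illustration, not part of the original workbook:

    import pandas as pd  # reading .xlsx also requires openpyxl

    df = pd.read_excel("instagram_reach.xlsx")  # placeholder file name

    # Equivalent of Excel TRIM: strip stray spaces from every text column.
    for col in df.select_dtypes(include="object").columns:
        df[col] = df[col].str.strip()

    # Equivalent of Remove Duplicates: drop fully identical rows.
    before = len(df)
    df = df.drop_duplicates()
    print(f"Removed {before - len(df)} duplicate rows; {len(df)} unique rows remain.")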

    🔍 Key Analysis Highlights

    1. Impressions by Source

    • Highest reach: Home > Hashtags > Explore > Other
    • Some totals exceed 100% due to overlapping sources

    2. Engagement Insights

    • Saves strongly correlate with higher impressions
    • Caption length is inversely related to likes
    • Shares have weak correlation with impressions

    3. Hashtag Patterns

    • Most used: #Thecleverprogrammer, #Amankharwal, #Python
    • Repeating hashtags does not guarantee higher reach

    ✅ Conclusion

    Shorter captions and higher save counts contribute more to reach than repeated hashtags. Profile visits are often linked to new followers.

    👩‍💻 Author

    Raghad's LinkedIn

    🧠 Inspiration

    Inspired by content from TheCleverProgrammer, Aman Kharwal, and Kaggle datasets.

    💬 Feedback

    Feel free to open an issue or share suggestions via the project page!

  5. Open Context Database SQL Dump and Parquet Exports

    • zenodo.org
    bin, zip
    Updated Jun 25, 2025
    Cite
    Eric Kansa; Sarah Whitcher Kansa (2025). Open Context Database SQL Dump and Parquet Exports [Dataset]. http://doi.org/10.5281/zenodo.15732000
    Explore at:
    bin, zip (available download formats)
    Dataset updated
    Jun 25, 2025
    Dataset provided by
    Open Context
    Authors
    Eric Kansa; Sarah Whitcher Kansa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py (the "production" branch is the one used for Open Context's primary public deployment).

    We also provide a Docker based approach for installing Open Context via this code repository: https://github.com/opencontext/oc-docker (the "production" branch installs the branch of code used for Open Context's primary public deployment).

    The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational Mapper (ORM).

    This database dump includes all of the published structured data organized and used by Open Context (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long-term digital repositories.

    These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included. The data are provided in a plain text SQL dump (for restoration into a version 14+ PostgreSQL database) and in the non-proprietary (but binary) parquet file format.
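    Any parquet-capable tool can open the parquet exports; a minimal Python sketch is shown below, assuming pandas with pyarrow is installed and using a placeholder file name, since the exact file names inside the Zenodo download are not listed here:

    import pandas as pd  # requires pyarrow (or fastparquet) for parquet support

    # Placeholder file name; substitute one of the parquet files from the download.
    table = pd.read_parquet("oc_all_manifest.parquet")

    print(table.shape)    # rows x columns in this export
    print(table.columns)  # column names of the exported table
    print(table.head())   # first few records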

    IMPORTANT

    This database dump contains data from more than 190 different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo-archived database dump.

  6. Data from: LifeSnaps: a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild

    • zenodo.org
    • explore.openaire.eu
    zip
    Updated Oct 20, 2022
    + more versions
    Cite
    Sofia Yfantidou; Christina Karagianni; Stefanos Efstathiou; Athena Vakali; Joao Palotti; Dimitrios Panteleimon Giakatos; Thomas Marchioro; Andrei Kazlouski; Elena Ferrari; Šarūnas Girdzijauskas (2022). LifeSnaps: a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild [Dataset]. http://doi.org/10.5281/zenodo.6832242
    Explore at:
    zip (available download formats)
    Dataset updated
    Oct 20, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sofia Yfantidou; Christina Karagianni; Stefanos Efstathiou; Athena Vakali; Joao Palotti; Dimitrios Panteleimon Giakatos; Thomas Marchioro; Andrei Kazlouski; Elena Ferrari; Šarūnas Girdzijauskas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    LifeSnaps Dataset Documentation

    Ubiquitous self-tracking technologies have penetrated various aspects of our lives, from physical and mental health monitoring to fitness and entertainment. Yet, limited data exist on the association between in the wild large-scale physical activity patterns, sleep, stress, and overall health, and behavioral patterns and psychological measurements due to challenges in collecting and releasing such datasets, such as waning user engagement, privacy considerations, and diversity in data modalities. In this paper, we present the LifeSnaps dataset, a multi-modal, longitudinal, and geographically-distributed dataset, containing a plethora of anthropological data, collected unobtrusively for the total course of more than 4 months by n=71 participants, under the European H2020 RAIS project. LifeSnaps contains more than 35 different data types from second to daily granularity, totaling more than 71M rows of data. The participants contributed their data through numerous validated surveys, real-time ecological momentary assessments, and a Fitbit Sense smartwatch, and consented to make these data available openly to empower future research. We envision that releasing this large-scale dataset of multi-modal real-world data, will open novel research opportunities and potential applications in the fields of medical digital innovations, data privacy and valorization, mental and physical well-being, psychology and behavioral sciences, machine learning, and human-computer interaction.

    The following instructions will get you started with the LifeSnaps dataset and are complementary to the original publication.

    Data Import: Reading CSV

    For ease of use, we provide CSV files containing Fitbit, SEMA, and survey data at daily and/or hourly granularity. You can read the files via any programming language. For example, in Python, you can read the files into a Pandas DataFrame with the pandas.read_csv() command.
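    A minimal sketch of that CSV route is shown below; the file name is a placeholder, since the exact CSV file names in the archive are not listed here:

    import pandas as pd

    # Placeholder file name; substitute any of the daily or hourly CSV files from the archive.
    df = pd.read_csv("daily_fitbit_sema_df_unprocessed.csv")

    print(df.shape)   # number of rows and columns
    print(df.dtypes)  # column names and inferred types
    print(df.head())  # first few records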

    Data Import: Setting up a MongoDB (Recommended)

    To take full advantage of the LifeSnaps dataset, we recommend that you use the raw, complete data via importing the LifeSnaps MongoDB database.

    To do so, open the terminal/command prompt and run the following command for each collection in the DB, supplying the path to the corresponding dump file. Ensure you have the MongoDB Database Tools installed.

    For the Fitbit data, run the following:

    mongorestore --host localhost:27017 -d rais_anonymized -c fitbit 

    For the SEMA data, run the following:

    mongorestore --host localhost:27017 -d rais_anonymized -c sema 

    For surveys data, run the following:

    mongorestore --host localhost:27017 -d rais_anonymized -c surveys 

    If you have access control enabled, then you will need to add the --username and --password parameters to the above commands.
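    Once restored, the collections can also be queried from Python with pymongo; the sketch below uses only the host, database, and collection names given above and prints one raw document rather than assuming any particular field names:

    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)  # same host/port as the mongorestore commands
    db = client["rais_anonymized"]

    print(db.list_collection_names())               # expect: fitbit, sema, surveys
    print(db["fitbit"].estimated_document_count())  # number of Fitbit documents
    print(db["fitbit"].find_one())                  # inspect one raw document's structure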

    Data Availability

    The MongoDB database contains three collections, fitbit, sema, and surveys, containing the Fitbit, SEMA3, and survey data, respectively. Similarly, the CSV files contain information related to these collections. Each document in any collection follows the format shown below:

    {
      _id: 