4 datasets found
  1. Datasets of "Neurosurgery and Artificial Intelligence: A Metric Analysis of...

    • figshare.com
    jar
    Updated Apr 3, 2025
    Cite
    Hector Julio Piñera-Castro; Christian Borges-García (2025). Datasets of "Neurosurgery and Artificial Intelligence: A Metric Analysis of Scopus-Indexed Original Articles (2014-2023)" [Dataset]. http://doi.org/10.6084/m9.figshare.28726415.v1
    Explore at:
    Available download formats: jar
    Dataset updated
    Apr 3, 2025
    Dataset provided by
    figshare
    Authors
    Hector Julio Piñera-Castro; Christian Borges-García
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: A comprehensive analysis of artificial intelligence's (AI) integration into neurosurgery is vital to identify research priorities, address gaps, and inform strategies for equitable innovation. Objective: To conduct a bibliometric analysis of Scopus-indexed (2014-2023) original articles at the intersection of AI and neurosurgery. Method: A descriptive metric study was conducted on 91 original articles, employing productivity, impact, and collaboration indicators. SciVal facilitated data extraction, while VOSviewer 1.6.11 enabled the mapping of co-authorship networks and keyword co-occurrence. IBM SPSS Statistics 27 was used to determine correlations between variables of interest (Kendall’s rank correlation coefficient, statistically significant for p < 0.05). Results: The 91 articles accumulated 2,197 citations (24.1/article), reflecting rising productivity. Most highly cited works (2019–2023) were published in Q1 journals. Dominant neurosurgical areas included neuro-oncology (25.4%) and education (20.9%), with AI applications focused on diagnostic accuracy (20.9%) and predictive tools (17.6%). Citations correlated with author numbers (p = 0.007). World Neurosurgery led in publications (Ndoc = 11), while JAMA Network Open had the highest citations/article (88.7). Author, institutional, and country productivity correlated strongly with citations (p < 0.001). Collaboration was universal (international: 29.7%, national: 53.8%, institutional: 16.5%). Conclusions: The analyzed scientific output exhibited a marked quantitative growth trend and high citation rates, with a predominant focus on leveraging AI to enhance diagnostic accuracy, particularly in neuro-oncology. Publications were concentrated in specialized, high-impact journals and predominantly originated from authors and institutions in high-income, technologically advanced Northern Hemisphere countries, where scientific collaboration played a foundational role in driving research advancements.
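    The Kendall rank correlation the abstract reports (e.g. between author counts and citations, p = 0.007) can be sketched with SciPy. The numbers below are invented for illustration only; they are not the study's Scopus data.

```python
from scipy.stats import kendalltau

# Invented per-article values standing in for the study's variables:
# number of authors and citation counts for a handful of articles.
authors = [2, 4, 3, 7, 5, 6, 1, 8]
citations = [5, 18, 9, 40, 22, 31, 3, 55]

# Kendall's tau measures how consistently the two rankings agree;
# p < 0.05 is the significance threshold the study uses.
tau, p = kendalltau(authors, citations)
print(f"Kendall tau = {tau:.2f}, p = {p:.5f}")
```

    With this perfectly concordant toy data the correlation is tau = 1.0; real bibliometric data would of course yield a weaker, noisier association.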

  2. A AI and Eye Tracking Reveal Design Elements’ Impact on E-Magazine Reader...

    • figshare.com
    csv
    Updated Feb 2, 2025
    Cite
    Dr. Hedda Martina Šola; Dr. Fayyaz Hussain Qureshi; Sarwar Khawaja (2025). A AI and Eye Tracking Reveal Design Elements’ Impact on E-Magazine Reader Engagement [Dataset]. http://doi.org/10.6084/m9.figshare.27880221.v1
    Explore at:
    Available download formats: csv
    Dataset updated
    Feb 2, 2025
    Dataset provided by
    figshare
    Authors
    Dr. Hedda Martina Šola; Dr. Fayyaz Hussain Qureshi; Sarwar Khawaja
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These data are part of research published in a top-tier Education Sciences journal (Q1), entitled "A AI and Eye Tracking Reveal Design Elements’ Impact on E-Magazine Reader Engagement". The research combined two technologies: webcam eye tracking tested at Oxford Business College (n = 144) and AI eye-tracking/EEG neuromarketing software that predicts human behaviour (n = 180,000).

  3. Table_1_Evolution of research trends in artificial intelligence for breast...

    • frontiersin.figshare.com
    • figshare.com
    bin
    Updated Jun 16, 2023
    + more versions
    Cite
    Asif Hassan Syed; Tabrej Khan (2023). Table_1_Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis.docx [Dataset]. http://doi.org/10.3389/fonc.2022.854927.s010
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 16, 2023
    Dataset provided by
    Frontiers
    Authors
    Asif Hassan Syed; Tabrej Khan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objective: In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study provides a comprehensive overview of the evolution of AI research for breast cancer diagnosis and prognosis using bibliometric analysis. Methodology: Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (an R package). Finally, open challenge areas were identified for future research. Results: The study revealed that the number of published studies on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), the Republic of China, and India are the most productive countries publication-wise in this field. The USA also leads in total citations; however, Hungary and the Netherlands take the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine the leading journals in this field. The most trending topics related to the study, transfer learning and deep learning, were identified. Conclusion: The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research on AI for breast cancer patients.

  4. Eye-tracking data of the 2017 Aesthetic value project (NESP 3.2.3, Griffith...

    • researchdata.edu.au
    bin
    Updated 2019
    + more versions
    Cite
    Becken, Susanne, Professor; Connolly, Rod, Professor; Stantic, Bela, Professor; Scott, Noel, Professor; Mandal, Ranju, Dr; Le, Dung (2019). Eye-tracking data of the 2017 Aesthetic value project (NESP 3.2.3, Griffith Institute for Tourism Research) [Dataset]. https://researchdata.edu.au/eye-tracking-2017-tourism-research/1440087
    Explore at:
    Available download formats: bin
    Dataset updated
    2019
    Dataset provided by
    eAtlas
    Authors
    Becken, Susanne, Professor; Connolly, Rod, Professor; Stantic, Bela, Professor; Scott, Noel, Professor; Mandal, Ranju, Dr; Le, Dung
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Time period covered
    Jan 28, 2017 - Jan 28, 2018
    Description

    This dataset consists of three data folders for the eye-tracking experiment conducted within the NESP 3.2.3 project (Tropical Water Quality Hub): (1) the Eye-tracking videos folder contains 66 Tobii recordings of participants’ on-screen eye movements; (2) the Heatmaps folder includes 21 heatmaps created by the Tobii eye-tracking software from the 66 participants’ data; and (3) the Input folder has the 21 original pictures used in the eye-tracking experiment. The dataset also includes one Excel (XLSX) file of eye-tracking data extracted from the Tobii software together with participant interview results, one SAV file as the input to the SPSS data analysis, and one SPV file as the output of that analysis.

    Methods: This dataset comprises both the input and output data of the eye-tracking experiments. The input includes 21 underwater pictures of the Great Barrier Reef, selected from an online search with the keyword “Great Barrier Reef”. These pictures were imported into the Tobii eye-tracking software to design the experiments. 66 participants were recruited using convenience sampling. After providing informed consent, they were seated in front of screen-based eye-tracking equipment (a Tobii T60 eye-tracker). Participants were free to look at each picture on screen for as long as they wanted, during which their eye movements were recorded. They also rated each picture on a 10-point beauty scale (1 = not beautiful at all, 10 = very beautiful) and a 10-point expectation scale (1 = not at all, 10 = very much). After the experiment, 40 subjects were also interviewed to identify the areas of interest (AOI) in each picture and to rate the beauty of these AOIs. Eye-tracking data, including participants’ recordings, heatmaps (images showing viewers’ attention focus), and raw eye-tracking measures (picture beauty, time to first fixation, fixation count, fixation duration, and total visit time), were then exported from the Tobii device in XLSX format. The raw eye-tracking data were then imported into IBM SPSS in SAV format for analysis, which produced an SPV output file.

    Further information can be found in the publications listed under References below.

    Format: The project dataset includes 132 eye-tracking videos in AVI format, 21 heatmaps in PNG format, 21 pictures in JPEG format, one XLSX document of raw eye-tracking measures and interview data, one SAV document as the input of the data analysis, and one SPV file showing the data analysis results.

    Data Dictionary:

    Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, Q10: Names of pictures used in eye-tracking experiment 2.
    3Q1, 3Q2, 3Q3, 3Q4, 3Q5, 3Q6, 3Q7, 3Q8, 3Q9, 3Q10, 3Q11: Names of pictures used in eye-tracking experiment 3.

    Raw Eye tracking Measurements excel spreadsheets:

    Tab - Picture:
    INDEX: the 10-point scale shown to participants
    VALUE: meaning of the 10-point scale
    Q1.1: beauty score
    Q1.2: expectation score

    Tab - Area of Interest (AOI):
    TIME TO FIRST FIXATION_Q1: time to first fixation in picture Q1 (the average time from the beginning of the recording until the picture was first fixated upon)
    TOTAL FIXATION DURATION_Q1: fixation duration in picture Q1 (the average length of all fixations across all recordings in the whole picture); a longer fixation means the object is more engaging in some way
    FIXATION COUNT_Q1: fixation count in picture Q1 (the average number of fixations in the picture)
    TOTAL VISIT DURATION_Q1: total visit time for picture Q1 (the average time participants spent looking at the picture)
    TIME TO FIRST FIXATION_AOI1: time to first fixation in the AOI identified in picture Q1
    TOTAL FIXATION DURATION_AOI1: fixation duration in the AOI identified in picture Q1
    FIXATION COUNT_AOI1: fixation count in the AOI identified in picture Q1
    TOTAL VISIT DURATION_AOI1: total visit time for the AOI identified in picture Q1

    Tab - AOI interview:
    AOI IDENTIFIED: the AOI most often mentioned by participants
    NUMBER OF PARTICIPANTS: the number of participants who mentioned the AOI in the previous column
    BEAUTY MEAN: the average beauty score of the corresponding AOI as rated by the 40 interviewed participants
    AOI-1: the AOI identified by the corresponding participant
    RATING: the beauty score associated with the AOI identified by the corresponding participant

    Tab - Analysis:
    REC: recording
    PICTURE: picture number
    BEAUTY: the average beauty score of the corresponding picture across 66 participants
    EXPECTATION: the average expectation score of the corresponding picture across 66 participants
    AOI BEAUTY: the average beauty score of the AOI identified in the corresponding picture, from the interviewed participants
    PICTURE 1st TIME: the average time to first fixation in the corresponding picture across 66 participants
    PFDURATION: the average fixation duration in the corresponding picture across 66 participants
    PFCOUNT: the average fixation count in the corresponding picture across 66 participants
    PTIMEVISIT: the average total visit time for the corresponding picture across 66 participants
    AOI 1stTIME: the average time to first fixation in the AOI identified in the corresponding picture across 66 participants
    AOIFDURATION: the average fixation duration in the AOI identified in the corresponding picture across 66 participants
    AOIFCOUNT: the average fixation count in the AOI identified in the corresponding picture across 66 participants
    AOITIMEVISIT: the average total visit time for the AOI identified in the corresponding picture across 66 participants
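
    A minimal sketch of how the Analysis-tab measures might be handled with pandas. The column names follow the data dictionary above, but the rows below are invented for illustration; the combined "engagement" score is an assumption of this sketch, not a measure defined in the dataset.

```python
import pandas as pd

# Hypothetical rows mimicking the "Analysis" tab of the raw eye-tracking
# spreadsheet; column names come from the data dictionary, values are invented.
analysis = pd.DataFrame({
    "PICTURE": [1, 2, 3],
    "BEAUTY": [7.2, 5.8, 8.4],         # mean 10-point beauty score (66 participants)
    "PFDURATION": [0.31, 0.27, 0.36],  # mean fixation duration (seconds)
    "PFCOUNT": [12.0, 9.5, 14.2],      # mean fixation count per picture
})

# One simple way to combine the fixation measures into an engagement proxy
# (this combination is illustrative, not part of the published analysis).
analysis["ENGAGEMENT"] = analysis["PFDURATION"] * analysis["PFCOUNT"]

# Rank pictures by perceived beauty.
ranked = analysis.sort_values("BEAUTY", ascending=False).reset_index(drop=True)
print(ranked[["PICTURE", "BEAUTY", "ENGAGEMENT"]])
```

    In practice one would read the real spreadsheet with `pd.read_excel` pointed at the Analysis tab instead of constructing the frame by hand.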

    References:

    Scott, N., Le, D., Becken, S., and Connolly, R. (2018, submitted). Measuring perceived beauty of the Great Barrier Reef using eye tracking. Journal of Sustainable Tourism.

    Becken, S., Connolly, R., Stantic, B., Scott, N., Mandal, R., and Le, D. (2018). Monitoring aesthetic value of the Great Barrier Reef by using innovative technologies and artificial intelligence. Griffith Institute for Tourism Research Report No. 15.

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data esp3\3.2.3_Aesthetic-value-GBR


