11 datasets found
  1. 🇺🇸 Charlie Kirk(†) Twitter/ 𝕏 Dataset

    • kaggle.com
    Updated Sep 28, 2025
    Cite
    BwandoWando (2025). 🇺🇸 Charlie Kirk(†) Twitter/ 𝕏 Dataset [Dataset]. http://doi.org/10.34740/kaggle/ds/8259158
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 28, 2025
    Dataset provided by
    Kaggle
    Authors
    BwandoWando
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Who is Charlie Kirk?

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1842206%2F9ff49a3bb052e339eb85a66dca611f6c%2Fcharlie-kirk-turning-point2-91025-91025-a19b6183557949938f0dc01df2c33a28.jpg?generation=1757731111497297&alt=media

    Charles James Kirk (October 14, 1993 – September 10, 2025) was an American conservative political activist, author, and media personality. He co-founded the organization Turning Point USA (TPUSA) in 2012 and was its executive director. He was the chief executive officer of Turning Point Action (TPAction) and a member of the Council for National Policy (CNP). In his later years, he was one of the most prominent voices of the populist MAGA movement and exemplified the growth of Christian nationalism in the Republican Party.

    From: https://en.wikipedia.org/wiki/Charlie_Kirk

    CBS News' "Who was Charlie Kirk?"

    https://www.youtube.com/watch?v=0xngCgJnO5E

    Death

    On September 10, 2025, while on stage at Utah Valley University in Orem, Utah, for a TPUSA event, "The American Comeback Tour", Kirk was fatally shot in the neck. The shooting took place at 12:23 p.m. MDT (18:23 UTC), around 20 minutes after the event began, in front of an audience of about 3,000 people.

    From: https://en.wikipedia.org/wiki/Charlie_Kirk

    Coverage of this Dataset

    • I queried tweets containing either #CharlieKirk or "Charlie Kirk" that were posted within the last 36 hours.

    Important Note

    • All tagged usernames (e.g. @username) and all forms of IDs are obfuscated and replaced with a unique hash ID derived from the original value, so referential integrity is retained (a sketch of this style of obfuscation follows this list)
    • Tagged usernames that have been banned, suspended, or deleted from the platform are still obfuscated
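
    As an illustration, here is a minimal sketch of this kind of deterministic obfuscation (my example, not the author's actual code; the secret key and token length are assumptions):

    ```python
    import hashlib
    import hmac

    # Hypothetical secret key; the dataset's actual key/salt is not published.
    SECRET_KEY = b"replace-with-a-private-key"

    def obfuscate(value: str) -> str:
        """Map a username or ID to a stable pseudonymous token.

        A keyed HMAC-SHA256 returns the same token for the same input every
        time, so joins across files keep working, while the original value
        cannot be recovered without the key.
        """
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    # The same handle always maps to the same token:
    assert obfuscate("@username") == obfuscate("@username")
    ```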

    "Well-known" authors

    I added a file denoting users who have posted tweets about the topic and have either of these characteristics:
    • Blue-certified accounts with at least 10K followers
    • Non-Blue-certified accounts with at least 50K followers

    This is to help map tweets back to, and add context on, the users who are being tagged or who are creating the tweets (see the join sketch below).
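
    For example (a hypothetical sketch; the real file and column names may differ), the well-known-authors file can be joined back onto the tweets by the obfuscated author ID:

    ```python
    import pandas as pd

    # File and column names below are illustrative, not the dataset's actual schema.
    tweets = pd.read_csv("tweets.csv")                   # one row per tweet, with author_hashid
    well_known = pd.read_csv("well_known_authors.csv")   # author_hashid -> followers, blue status

    # A left join keeps every tweet and adds context where the author is "well-known".
    enriched = tweets.merge(well_known, on="author_hashid", how="left")
    print(enriched["followers"].notna().mean())  # share of tweets by well-known authors
    ```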

    Source

    I signed up for a trial with https://twitterapi.io/; check it out!

    Image

    Credit: OLIVIER TOURON/AFP via Getty

  2. Data from: HRI-SENSE: A Multimodal Dataset on Social and Emotional Responses...

    • zenodo.org
    zip
    Updated Mar 10, 2025
    Cite
    Balint Gucsi; Tuyen Nguyen Tan Viet; Bing Chu; Danesh Tarapore; Long Tran-Thanh (2025). HRI-SENSE: A Multimodal Dataset on Social and Emotional Responses to Robot Behaviour [Dataset]. http://doi.org/10.5281/zenodo.14267885
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 10, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Balint Gucsi; Tuyen Nguyen Tan Viet; Bing Chu; Danesh Tarapore; Long Tran-Thanh
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
    License information was derived automatically

    Description
    This is the dataset for the paper "HRI-SENSE: A Multimodal Dataset on Social and Emotional Responses to Robot Behaviour", available in the Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction.
    The dataset captures various modalities of user behaviour (facial landmarks, facial action units, head pose, body pose landmarks, depth footage) exhibited by participants interacting with a TIAGo Steel robot following different behaviour models on a collaborative manipulation "Burger Assembly" task. Details of the task can be found in the paper. The non-verbal modalities are complemented by explicit feedback signals (verbal dialogue transcripts), robot joint movement data, interaction event labels, and self-assessed questionnaires (pre-study and post-interaction) on sociodemographics and perceived user impressions (e.g. frustration, satisfaction).
    The dataset's contents have been collected from over 6 hours of verbal and physical human-robot interactions in over 146 sessions with 18 participants.

    Data Modalities

    User facial expression data (facial landmarks, facial action units and head pose) have been calculated by OpenFace:
    • head location: pose_Tx, pose_Ty, pose_Tz
    • head rotation: pose_Rx, pose_Ry, pose_Rz
    • facial landmark 3d locations: X_0, ... X_67, Y_0,...Y_67, Z_0,...Z_67
    • facial action units intensity (r) and presence (c): AU01_r, AU02_r,...AU25_r, AU26_r, AU45_r, AU01_c, AU02_c,...AU28_c, AU45_c
    User pose landmarks have been calculated by MediaPipe Pose Landmarker:
    • 3d coordinates of 33 pose landmarks
    Verbal dialogue transcripts have been produced by OpenAI's Whisper model.
    Depth information and robot joint data have been recorded by the TIAGo robot's sensors.
    Self-assessed questionnaire data has been collected using a 5-point Likert scale, following question items and practices established in previous Human-Robot Interaction and Psychology studies. Details of this can be found in the paper.
    For more details on the data collection and processing pipeline, please see the paper.

    Contents

    The dataset is organised as:
    • README.md
    • questionnaire-data/
      • pre-study-questionnaire/
        • pre-study-questionnaire.pdf
        • pre-study-questionnaire-responses.csv: participant-id,A1-age,A2-occupation,B1,B2,B3,B4,B5,B6,C1,C2,C3,C4,C5,C6,C7,C8,C9,C10,C11,C12
      • post-interaction-questionnaire/
        • post-interaction-questionnaire.pdf
        • post-interaction-questionnaire-responses.csv: participant-id,model-id,Q1,Q2,Q3,Q4,Q5,Q6,Q7,Q8,Q9,Q10,Q11,Q12,Q13,Q14,Q15,Q16,Q17
    • sensory-data/
      • depth-data/
        • P[participant_id]-M[model_id]-[trial_number].mp4
      • user-face-data/
        • P[participant_id]-M[model_id]-[trial_number].csv: timestamp,confidence,success,pose_Tx,pose_Ty,pose_Tz,pose_Rx,pose_Ry,pose_Rz,X_0,X_1,X_2,X_3,X_4,X_5,X_6,X_7,X_8,X_9,X_10,X_11,X_12,X_13,X_14,X_15,X_16,X_17,X_18,X_19,X_20,X_21,X_22,X_23,X_24,X_25,X_26,X_27,X_28,X_29,X_30,X_31,X_32,X_33,X_34,X_35,X_36,X_37,X_38,X_39,X_40,X_41,X_42,X_43,X_44,X_45,X_46,X_47,X_48,X_49,X_50,X_51,X_52,X_53,X_54,X_55,X_56,X_57,X_58,X_59,X_60,X_61,X_62,X_63,X_64,X_65,X_66,X_67,Y_0,Y_1,Y_2,Y_3,Y_4,Y_5,Y_6,Y_7,Y_8,Y_9,Y_10,Y_11,Y_12,Y_13,Y_14,Y_15,Y_16,Y_17,Y_18,Y_19,Y_20,Y_21,Y_22,Y_23,Y_24,Y_25,Y_26,Y_27,Y_28,Y_29,Y_30,Y_31,Y_32,Y_33,Y_34,Y_35,Y_36,Y_37,Y_38,Y_39,Y_40,Y_41,Y_42,Y_43,Y_44,Y_45,Y_46,Y_47,Y_48,Y_49,Y_50,Y_51,Y_52,Y_53,Y_54,Y_55,Y_56,Y_57,Y_58,Y_59,Y_60,Y_61,Y_62,Y_63,Y_64,Y_65,Y_66,Y_67,Z_0,Z_1,Z_2,Z_3,Z_4,Z_5,Z_6,Z_7,Z_8,Z_9,Z_10,Z_11,Z_12,Z_13,Z_14,Z_15,Z_16,Z_17,Z_18,Z_19,Z_20,Z_21,Z_22,Z_23,Z_24,Z_25,Z_26,Z_27,Z_28,Z_29,Z_30,Z_31,Z_32,Z_33,Z_34,Z_35,Z_36,Z_37,Z_38,Z_39,Z_40,Z_41,Z_42,Z_43,Z_44,Z_45,Z_46,Z_47,Z_48,Z_49,Z_50,Z_51,Z_52,Z_53,Z_54,Z_55,Z_56,Z_57,Z_58,Z_59,Z_60,Z_61,Z_62,Z_63,Z_64,Z_65,Z_66,Z_67,AU01_r,AU02_r,AU04_r,AU05_r,AU06_r,AU07_r,AU09_r,AU10_r,AU12_r,AU14_r,AU15_r,AU17_r,AU20_r,AU23_r,AU25_r,AU26_r,AU45_r,AU01_c,AU02_c,AU04_c,AU05_c,AU06_c,AU07_c,AU09_c,AU10_c,AU12_c,AU14_c,AU15_c,AU17_c,AU20_c,AU23_c,AU25_c,AU26_c,AU28_c,AU45_c
      • user-pose-data/
        • P[participant_id]-M[model_id]-[trial_number]-[camera_id].csv: time,L0-x,L0-y,L0-z,L1-x,L1-y,L1-z,L2-x,L2-y,L2-z,L3-x,L3-y,L3-z,L4-x,L4-y,L4-z,L5-x,L5-y,L5-z,L6-x,L6-y,L6-z,L7-x,L7-y,L7-z,L8-x,L8-y,L8-z,L9-x,L9-y,L9-z,L10-x,L10-y,L10-z,L11-x,L11-y,L11-z,L12-x,L12-y,L12-z,L13-x,L13-y,L13-z,L14-x,L14-y,L14-z,L15-x,L15-y,L15-z,L16-x,L16-y,L16-z,L17-x,L17-y,L17-z,L18-x,L18-y,L18-z,L19-x,L19-y,L19-z,L20-x,L20-y,L20-z,L21-x,L21-y,L21-z,L22-x,L22-y,L22-z,L23-x,L23-y,L23-z,L24-x,L24-y,L24-z,L25-x,L25-y,L25-z,L26-x,L26-y,L26-z,L27-x,L27-y,L27-z,L28-x,L28-y,L28-z,L29-x,L29-y,L29-z,L30-x,L30-y,L30-z,L31-x,L31-y,L31-z,L32-x,L32-y,L32-z
      • robot-joint-data/
        • P[participant_id]-M[model_id]-[trial_number].csv: time, arm_1_joint, arm_2_joint, arm_3_joint, arm_4_joint, arm_5_joint, arm_6_joint, arm_7_joint
      • dialogue-transcript/
        • P[participant_id]-M[model_id]-[trial_number].csv: start,end,text
    • interaction-event-labels/
      • P[participant_id]-M[model_id]-[trial_number].csv: event,start,end
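
    As an illustration of this layout, here is a minimal loading sketch (not code shipped with the dataset; the ids below are examples following the naming convention above):

    ```python
    import pandas as pd

    participant, model, trial, camera = 1, 1, 1, 1  # example ids

    # OpenFace output: timestamp plus head pose, 68 facial landmarks, and action units.
    face = pd.read_csv(f"sensory-data/user-face-data/P{participant}-M{model}-{trial}.csv")

    # MediaPipe output: time plus 3D coordinates of the 33 pose landmarks (L0-x ... L32-z).
    pose = pd.read_csv(f"sensory-data/user-pose-data/P{participant}-M{model}-{trial}-{camera}.csv")

    # Example: mean intensity of AU12 (lip corner puller, commonly linked to smiling).
    print(face["AU12_r"].mean())
    ```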

    Limitations

    Due to recording sensor malfunctions and processing library limitations, not all interaction scenario data contains all modalities. Verbal dialogue transcripts or user pose data may be incomplete or missing for a small number of interactions or recording angles.

    Acknowledgements

    This work was supported by UK Research and Innovation [EP/S024298/1].

  3. Spiking Seizure Classification Dataset

    • data.niaid.nih.gov
    Updated Jan 13, 2025
    Cite
    Gallou, Olympia (2025). Spiking Seizure Classification Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10800793
    Explore at:
    Dataset updated
    Jan 13, 2025
    Dataset provided by
    Cook, Matthew
    Ghosh, Saptarshi
    Bartels, Jim
    Gallou, Olympia
    Ito, Hiroyuki
    Indiveri, Giacomo
    Sarnthein, Johannes
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
    License information was derived automatically

    Description

    Dataset of event-encoded analog EEG signals for the detection of epileptic seizures

    This dataset contains events encoded from the analog signals recorded during pre-surgical evaluations of patients at the Sleep-Wake-Epilepsy-Center (SWEC) of the University Department of Neurology at the Inselspital Bern. The analog signals are sourced from the SWEC-ETHZ iEEG Database.

    This database contains event streams for 10 seizures recorded from 5 patients, generated by the DYnamic Neuromorphic Asynchronous Processor (DYNAP-SE2) to demonstrate a proof of concept for encoding seizures via network synchronization. The pipeline consists of two parts: (I) an Analog Front End (AFE) and (II) an SNN termed the "Non-Local Non-Global" (NLNG) network.

    In the first part of the pipeline, the digitally recorded signals from SWEC-ETHZ iEEG Database are converted to analog signals via an 18-bit Digital-to-Analog converter (DAC) and then amplified and encoded into events by an Asynchronous Delta Modulator (ADM). Then in the second part, the encoded event streams are fed into the SNN that extracts the features of the epileptic seizure by extracting the partial synchronous patterns intrinsic to the seizure dynamics.

    Details about the neuromorphic processing pipeline and the encoding process are included in a manuscript under review. The preprint is available on bioRxiv.

    Installation

    The installation requires Python >= 3.x and conda (or py-venv). Users can then install the requirements inside a conda environment using

    conda env create -f requirements.txt -n sez

    Once created, the conda environment can be activated with `conda activate sez`.

    The main files in the database are organised in the hierarchy below:

    EventSezDataset/
    ├─ data/
    │  ├─ PxSx/
    │  │  ├─ Patx_Sz_x_CHx.csv
    ├─ LSVM_Params/
    │  ├─ opt_svm_params/
    │  ├─ pat_x_features_SYNCH/
    ├─ fig_gen.py
    ├─ sync_mat_gen.py
    ├─ SeizDetection_FR.py
    ├─ SeizDetection_SYNCH.py
    ├─ support.py
    ├─ run.sh
    ├─ requirements.txt

    where x represents the Patient ID and the Seizure ID respectively.

    requirements.txt: This file lists the requirements for the execution of the Python code.

    fig_gen.py: This file plots the analog signals and the associated AFE and NLNG event streams. Run it with `python fig_gen.py 1 1 13`, which plots patient 1, seizure 1, and channel 13 of the recording.

    sync_mat_gen.py: This file plots the synchronization matrices emerging from the ADM and the NLNG spikes, with either a linear or a logarithmic colorbar. Run it with `python sync_mat_gen.py 1 1` or `python sync_mat_gen.py 1 1 log`; this generates four figures (pre-seizure, first half of seizure, second half of seizure, and post-seizure periods) for patient 1, seizure 1. The third argument can be left blank or set to lin or log for the respective colorbar scale. The time is the signal time as described in the table below.

    run.sh: A simple Linux script to run the above code for all patients and seizures.

    SeizDetection_FR.py: This file runs the LSVM on the ADM and NLNG spikes, using the firing rate (FR) as a feature. The code is currently set up to plot with pre-computed features (in the LSVM_Params/opt_svm_params/ folder). Users can also use the code to train the LSVM with different parameters.

    SeizDetection_SYNCH.py: This file runs the LSVM on the kernelized ADM and NLNG spikes, using the flattened SYNCH matrices as a feature. The code is currently set up to plot with pre-computed features (in the LSVM_Params/pat_x_features_SYNCH/ folder). Users can also use the code to train the LSVM with different parameters.

    LSVM_Params: Folder containing LSVM features with different parameter combinations.

    support.py: This file contains the necessary functions.

    data/P1S1/: This folder, for example, contains the event streams for all channels for seizure 1 of patient 1.

    Pat1_Sz_1_CH1.csv: This file contains the spikes of the AFE and the NLNG layers in the following tabular format (which can be parsed by fig_gen.py):

    Comments

    # SStart: 180 // Start of the seizure in signal time
    # SEnd: 276.0 // End of the seizure in signal time
    # Pid: 2 // The patient ID as per the SWEC-ETHZ iEEG Database
    # Sid: 1 // The seizure ID as per the SWEC-ETHZ iEEG Database
    # Channel_No: 1 // The channel number

    Columns:
    • SYS_time: the time from the interface FPGA
    • signal_time: the time of the signal as per the SWEC-ETHZ iEEG Database
    • dac_value: the value of the analog signal as recorded in the SWEC-ETHZ iEEG Database
    • ADMspikes: the event stream output by the AFE, in boolean format; True represents a spike
    • NLNGspikes: the spike stream output by the SNN, in boolean format; True represents a spike
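
    A minimal reading sketch for one channel file (my illustration; it assumes the files are comma-separated and that the header comments above appear as '#'-prefixed lines):

    ```python
    import pandas as pd

    path = "data/P1S1/Pat1_Sz_1_CH1.csv"

    # Skip the '#' comment header (SStart, SEnd, Pid, Sid, Channel_No).
    df = pd.read_csv(path, comment="#")

    # ADMspikes / NLNGspikes are boolean event streams: True marks a spike.
    adm_times = df.loc[df["ADMspikes"], "signal_time"]
    nlng_times = df.loc[df["NLNGspikes"], "signal_time"]
    print(f"AFE events: {len(adm_times)}, NLNG spikes: {len(nlng_times)}")
    ```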

  4. NUIG_EyeGaze01(Labelled eye gaze dataset)

    • data.mendeley.com
    Updated Feb 27, 2019
    Cite
    Anuradha Kar (2019). NUIG_EyeGaze01(Labelled eye gaze dataset) [Dataset]. http://doi.org/10.17632/cfm4d9y7bh.1
    Explore at:
    Dataset updated
    Feb 27, 2019
    Authors
    Anuradha Kar
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0) (https://creativecommons.org/licenses/by-nc/3.0/)
    License information was derived automatically

    Description

    The NUIG_EyeGaze01(Labelled eye gaze dataset) is a rich and diverse gaze dataset, built from eye gaze data collected in experiments under a wide range of operating conditions on three user platforms (desktop, laptop, tablet). Gaze data was collected under one condition at a time.

    The dataset includes gaze (fixation) data collected under 17 different head poses, 4 user distances, 6 platform poses, and 3 display screen sizes and resolutions. Each gaze data file is labelled with the operating condition under which it was collected and has the name format USERNUMBER_CONDITION_PLATFORM.CSV.

    CONDITION:
    • RP: roll plus, in degrees; RM: roll minus, in degrees
    • PP: pitch plus, in degrees; PM: pitch minus, in degrees
    • YP: yaw plus, in degrees; YM: yaw minus, in degrees
    • 50, 60, 70, 80: user distances

    PLATFORM: desk (desktop), lap (laptop), tab (tablet)

    Displays:
    • Desktop: 22 inch, 1680 x 1050 pixels
    • Laptop: 14 inch, 1366 x 768 pixels
    • Tablet: 10.1 inch, 1920 x 800 pixels

    Eye tracker accuracy: 0.5 degrees (for neutral head and tracker position)

    The dataset has 3 folders, "Desktop", "Laptop", and "Tablet", containing gaze data from the respective platforms. The Desktop folder has 2 sub-folders, user_distance and head_pose, with data for different user distances and head poses (neutral, roll, pitch, yaw) measured with the desktop setup. The Tablet folder has 2 sub-folders, user_distance and tablet_pose, with data for different user distances and tablet+tracker poses (neutral, roll, pitch, yaw) measured with the tablet setup. The Laptop folder has one sub-folder, user_distance, with data for different user distances measured with the laptop setup.

    All data files are in CSV format. Each file contains the following data header fields:

    ("TIM REL","GTX", "GTY","XRAW", "YRAW","GT Xmm", "GT Ymm","Xmm", "Ymm","YAW GT", "YAW DATA","PITCH GT", "PITCH DATA","GAZE GT","GAZE ANG", "DIFF GZ", "AOI_IND","AOI_X","AOI_Y","MEAN_ERR","STD ERR")

    The meanings of the header fields are as follows:

    • TIM REL: relative time stamp for each gaze data point (measured during data collection)
    • GTX, GTY: ground truth x, y positions in pixels
    • XRAW, YRAW: raw gaze data x, y coordinates in pixels
    • GT Xmm, GT Ymm: ground truth x, y positions in mm
    • Xmm, Ymm: gaze x, y positions in mm
    • YAW GT, YAW DATA: ground truth and estimated yaw angles
    • PITCH GT, PITCH DATA: ground truth and estimated pitch angles
    • GAZE GT, GAZE ANG: ground truth and estimated gaze angles
    • DIFF GZ: gaze angular accuracy
    • AOI_IND, AOI_X, AOI_Y: index of the stimuli locations and their x, y coordinates
    • MEAN_ERR, STD ERR: mean and standard deviation of error at the stimuli locations
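
    For example (an illustrative sketch, not part of the dataset), per-file gaze accuracy can be summarised from the DIFF GZ column, with the operating condition recovered from the file name:

    ```python
    from pathlib import Path

    import pandas as pd

    def summarise(path: Path) -> dict:
        """Summarise one gaze file; the condition is encoded in the file name."""
        user, condition, platform = path.stem.split("_")  # USERNUMBER_CONDITION_PLATFORM
        df = pd.read_csv(path)
        return {
            "user": user,
            "condition": condition,  # e.g. a head-pose code such as RP, or a distance such as 60
            "platform": platform,    # desk, lap, or tab
            "mean_gaze_error_deg": df["DIFF GZ"].mean(),
        }

    # Folder and extension follow the structure described above.
    rows = [summarise(p) for p in Path("Desktop/head_pose").glob("*.CSV")]
    print(pd.DataFrame(rows).groupby("condition")["mean_gaze_error_deg"].mean())
    ```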

    For more details on the purpose of this dataset and the data collection method, please consult the paper by the authors of this dataset:

    Anuradha Kar, Peter Corcoran: Performance Evaluation Strategies for Eye Gaze Estimation Systems with Quantitative Metrics and Visualizations. Sensors 18(9): 3151 (2018)

  5. Ocular Toxoplasmosis Fundus Images Dataset

    • kaggle.com
    Updated Sep 27, 2023
    Cite
    TensorKitty (2023). Ocular Toxoplasmosis Fundus Images Dataset [Dataset]. https://www.kaggle.com/datasets/nafin59/ocular-toxoplasmosis-fundus-images-dataset/suggestions?status=pending
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 27, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    TensorKitty
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Toxoplasmosis chorioretinitis is commonly diagnosed by an ophthalmologist through the evaluation of the fundus images of a patient. Early detection of these lesions may help to prevent blindness. In this article we present a data set of fundus images labeled into three categories: healthy eye, inactive and active chorioretinitis. The dataset was developed by three ophthalmologists with expertise in toxoplasmosis detection using fundus images. The dataset will be of great use to researchers working on ophthalmic image analysis using artificial intelligence techniques for the automatic detection of toxoplasmosis chorioretinitis.

    Data Description

    Raw Images

    The folder named "Data_Raw_6class_All" contains the original fundus images captured at 2 hospital centers: 1. Hospital de ClĂ­nicas Medical Center: The period of time consumed to collect the fundus images was from 2018 to 2020. The dataset consists of 291 fundus images with a size of 2124 x 2056 pixels in JPG format.

    2.Niños de Acosta Ñú General Pediatric Hospital: Images were acquired in children under 18 years of age. The images were captured in the year 2021. The dataset consists of 121 fundus images with a size of 1536 x 1152 pixels in JPG format.

    The images correspond to patients with suspected congenital toxoplasmosis infection. The fundus images are classified as Healthy or Diseased. Diseased, in turn, is classified into: i) Inactive only, ii) Active only, and iii) Active/Inactive. The number of images in each class is shown in the following table:

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4418512%2F72f950749f1b1f0be296e476a6a9752c%2Ftable.PNG?generation=1695835869882569&alt=media

    Processed Images

    The folder named "Data_Processed_Paper" contains the processed images divided into only Healthy and Diseased Classes so that a binary classification can be performed. The "Diseased" class is formed by combining the 5 sub-divisions under it. After going through literature and rigorous benchmarking experimentation, Alam et al. has established a preprocessing pipeline that yields the best classification results. The preprocessing includes denoising/smoothening, normalization, contrast enhancement, illumination equalization, color space transformation etc. Implementation details can be found in the paper. The following image represents the preprocessing pipeline.

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4418512%2F60cbb0f24a4df7a94a0ef64671ed365d%2Fpreprocess%20-%20Copy.PNG?generation=1695837214492045&alt=media
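
    A rough sketch of this style of preprocessing (my paraphrase using OpenCV; the exact steps and parameters used by Alam et al. are specified in the paper):

    ```python
    import cv2
    import numpy as np

    def preprocess_fundus(path: str) -> np.ndarray:
        """Denoise, equalize contrast/illumination, and normalize a fundus image."""
        img = cv2.imread(path)
        img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)  # smoothing

        # Contrast enhancement via CLAHE on the luminance channel.
        l, a, b = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2LAB))
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        img = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

        # Normalize intensities to [0, 1] for the classifier.
        return img.astype(np.float32) / 255.0
    ```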

    Learn more at our GitHub repo!

    Citation

    If this dataset helped your research, please cite the following articles:

    Cardozo, O., Ojeda, V., Parra, R., Mello-Román, J. C., Noguera, J. L. V., García-Torres, M. et al. (2023). Dataset of fundus images for the diagnosis of ocular toxoplasmosis. Data in Brief, 48, 109056.

    @article{cardozo2023dataset,
      title={Dataset of fundus images for the diagnosis of ocular toxoplasmosis},
      author={Cardozo, Olivia and Ojeda, Verena and Parra, Rodrigo and Mello-Rom{\'a}n, Julio C{\'e}sar and Noguera, Jos{\'e} Luis V{\'a}zquez and Garc{\'\i}a-Torres, Miguel and Divina, Federico and Grillo, Sebastian A and Villalba, Cynthia and Facon, Jacques and others},
      journal={Data in Brief},
      volume={48},
      pages={109056},
      year={2023},
      publisher={Elsevier}
    }

    Alam, S. S., Shuvo, S. B., Ali, S. N., Ahmed, F., Chakma, A., & Jang, Y. M. (2023). Benchmarking Deep Learning Frameworks for Automated Diagnosis of Ocular Toxoplasmosis: A Comprehensive Approach to Classification and Segmentation. arXiv preprint arXiv:2305.10975.

    @article{alam2023benchmarking,
      title={Benchmarking Deep Learning Frameworks for Automated Diagnosis of Ocular Toxoplasmosis: A Comprehensive Approach to Classification and Segmentation},
      author={Alam, Syed Samiul and Shuvo, Samiul Based and Ali, Shams Nafisa and Ahmed, Fardeen and Chakma, Arbil and Jang, Yeong Min},
      journal={arXiv preprint arXiv:2305.10975},
      year={2023}
    }

  6. Twitter users in the United States 2019-2028

    • statista.com
    Updated Jul 30, 2025
    Cite
    Statista Research Department (2025). Twitter users in the United States 2019-2028 [Dataset]. https://www.statista.com/topics/3196/social-media-usage-in-the-united-states/
    Explore at:
    Dataset updated
    Jul 30, 2025
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Statista Research Department
    Area covered
    United States
    Description

    The number of Twitter users in the United States was forecast to increase continuously between 2024 and 2028 by a total of 4.3 million users (+5.32 percent). After a ninth consecutive year of growth, the Twitter user base is estimated to reach 85.08 million users, a new peak, in 2028. Notably, the number of Twitter users has increased continuously over the past years.

    User figures, shown here for the platform Twitter, have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period.

    The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable datasets (see the supplementary notes under details for more information).

    Find more key insights for the number of Twitter users in countries like Canada and Mexico.

  7. Number of LinkedIn users in the United Kingdom 2019-2028

    • statista.com
    Updated Nov 22, 2024
    Cite
    Statista Research Department (2024). Number of LinkedIn users in the United Kingdom 2019-2028 [Dataset]. https://www.statista.com/topics/3236/social-media-usage-in-the-uk/
    Explore at:
    Dataset updated
    Nov 22, 2024
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Statista Research Department
    Area covered
    United Kingdom
    Description

    The number of LinkedIn users in the United Kingdom was forecast to increase continuously between 2024 and 2028 by a total of 1.5 million users (+4.51 percent). After an eighth consecutive year of growth, the LinkedIn user base is estimated to reach 34.7 million users, a new peak, in 2028.

    User figures, shown here for the platform LinkedIn, have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period and count multiple accounts held by one person only once.

    The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable datasets (see the supplementary notes under details for more information).

  8. Barro Colorado whole-island aerial photogrammetry products for 2018-2023.

    • smithsonian.figshare.com
    bin
    Updated Dec 15, 2023
    Cite
    Milton Garcia; Vicente Vasquez; Helene C. Muller-Landau (2023). Barro Colorado whole-island aerial photogrammetry products for 2018-2023. [Dataset]. http://doi.org/10.25573/data.24757284.v4
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 15, 2023
    Dataset provided by
    Smithsonian Tropical Research Institute
    Authors
    Milton Garcia; Vicente Vasquez; Helene C. Muller-Landau
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    Barro Colorado Island
    Description

    Title: Barro Colorado whole-island aerial photogrammetry products for 2018-2023

    Authors: Milton García, Vicente Vásquez, and Helene C. Muller-Landau

    Data citation and use

    Please cite this dataset as follows: Garcia, M., V. Vásquez, and H. C. Muller-Landau (2023). Barro Colorado whole-island aerial photogrammetry products for 2018-2023 (Version 4). Smithsonian Tropical Research Institute. https://doi.org/10.25573/data.24757284

    This data is licensed as CC BY 4.0 and is thus freely available for reuse with proper citation. We ask that data users share any resulting publications, preprints, associated analysis code, and derived data products with us by emailing mullerh@si.edu. We are open to contributing our expert knowledge of the study site and datasets to projects that use these data; please direct queries regarding potential collaboration to Helene Muller-Landau, mullerh@si.edu.

    Note that this dataset is part of a collection of Panama UAV data on Smithsonian Figshare, which can be viewed at https://smithsonian.figshare.com/projects/Panama_Forest_Landscapes_UAV/115572. Additional information about this research can be found at the Muller-Landau lab website at https://hmullerlandau.com/

    Data Description

    This dataset is part of a larger initiative monitoring forests in Panama using drones (unoccupied aerial vehicles), led by Dr. Helene Muller-Landau at the Smithsonian Tropical Research Institute. As part of this initiative, we have been collecting repeat imagery of all 1543 ha of Barro Colorado Island (BCI), Panama, since June 2015 (see Cushman et al. 2022a for data products for 2015, 2018, and 2020). The dataset published here encompasses a total of 8 flight missions between June 2018 and June 2023, including one in June or July of every year and additional missions in February and March 2023. The June-July missions were timed to capture the flowering of the canopy emergent tree species Dipteryx oleifera; the February and March missions were timed to capture flowering of the canopy tree Jacaranda copaia. Dipteryx flowers are purplish pink, and Jacaranda flowers are bluish purple.

    Flights were conducted using a senseFly eBee drone and a S.O.D.A. camera with a resolution of 20 megapixels, at a fixed elevation of 601 meters above sea level, and thus 430-575 m above ground and ~390-575 m above the top of the canopy (ground elevation ranges from 26 to 171 m, and canopy height ranges from 0 to 55 m). Flights were conducted with lateral overlap of 77% and along-path overlap of 77%, which translated to 170 m between flight lines and 113 m between photos along a flight path.

    The drone imagery was processed independently for each date using the Agisoft Metashape Pro 2.0 Python API (Agisoft LLC), employing a standardized workflow (Vasquez 2023). Key parameters for this processing included the highest setting for photo alignment, the medium setting for point cloud construction, and aggressive point filtering; for additional details, see Vasquez (2023).

    The dataset comprises the following for each flight mission:
    • Orthomosaic: an orthorectified image with three bands and an empty alpha band, in .tif format.
    • DSM: a digital surface model with a single band, in .tif format.
    • Point cloud: in .ply format, with coordinate reference system UTM 17 N (EPSG:32617).
    • Processing report: containing all processing parameters utilized in Agisoft Metashape Pro 2.0.

    Note that these data products have NOT been aligned across missions and contain substantial alignment errors. Cushman et al. (2022b) employed tiling and iterative closest point algorithms to align the point clouds for 2015, 2018, and 2020 to 2009 airborne lidar data, create associated digital surface models, and then difference these to quantify canopy disturbance patterns. Her original R code is available at Cushman et al. (2022a); it is based on a number of packages that have since been retired. An updated version of this code is available on GitHub at https://github.com/PanamaForestGEO/DroneCodeBCIwide/tree/main/scripts

    Metadata

    Detailed metadata on the products and the raw images is provided in the comma-separated values (.csv) files. The data products have not been altered and retain all the metadata with which they were exported from Agisoft Metashape 2.0.

    Orthomosaics
    • N-S extension: 40806 px - 46806 px
    • E-W extension: 45679 px - 48506 px
    • Bands: 4
    • Data type: unsigned integer, 8 bit
    • Cell size x: 0.138498 m - 0.143903 m
    • Cell size y: 0.138498 m - 0.143903 m
    • Format: GeoTIFF
    • Coordinate reference system: UTM 17 N, EPSG:32617
    • No data value: none
    • Bottom left corner x coordinate: 623278.3334646526 - 623664.4922428882
    • Bottom left corner y coordinate: 1015131.5292435415 - 1015660.597072851

    DSMs
    • N-S extension: 10202 px - 11702 px
    • E-W extension: 11420 px - 13548 px
    • Bands: 1
    • Data type: float, 32 bit
    • Cell size x: 0.553992 m - 0.6 m
    • Cell size y: 0.553992 m - 0.6 m
    • Format: GeoTIFF
    • Coordinate reference system: UTM 17 N, EPSG:32617
    • No data value: -32767
    • Bottom left corner x coordinate: 622774.186991835 - 623664.492242888
    • Bottom left corner y coordinate: 1015131.57561176 - 1015754.64117185

    Point clouds
    • Number of points: 116325422 - 130890311
    • Colors: RGB, normalized 0-1
    • Coordinate reference system: UTM 17 N, EPSG:32617
    • Format: Polygon File Format (Stanford Triangle Format)

    Notes

    The BCI_whole_2018_08_25_EBEE_dipteryx mission encompassed three different dates: 2018_06_14, 2018_06_21, and 2018_08_25. Distinct flight lines were conducted on each date, and the products published here are a composite of images from these flights.

    The BCI_whole_2019_06_27_EBEE_dipteryx mission features raw images taken on 2019_06_19, 2019_06_22, and 2019_06_24. However, according to the pilot's notes, the flight itself occurred on 2019_06_27, hence the naming convention for the flight.

    The BCI_whole_2023_03_18_EBEE_jacaranda mission's raw image dates spanned 2023_03_16 and 2023_03_18. In line with standard practice, we name the data product after the last date of data collection.

    File naming scheme

    We provide a .zip file for each mission with the naming convention Macrosite_site_year_month_day_drone_missiontype. Each orthomosaic follows the naming convention Macrosite_site_year_month_day_orthomosaic_missiontype, each point cloud Macrosite_site_year_month_day_cloud_missiontype, and each DSM Macrosite_site_year_month_day_dsm_missiontype. (A small parsing sketch follows the references below.)

    Author contributions

    MG led the drone imagery collection. VV processed the drone imagery. HCM conceived the study, wrote the grant proposals to obtain funding, and supervised the research. VV and HCM wrote the data description.

    Acknowledgments

    Milton Solano assisted with drone data management. Pablo Ramos, Paulino Villareal, and others assisted with drone data collection. Funding and/or in-kind support was provided by the Smithsonian Institution Scholarly Studies grant program (HCM), the Smithsonian Institution Equipment fund (HCM), Smithsonian ForestGEO, and the Smithsonian Tropical Research Institute.

    References

    Agisoft LLC. (n.d.). Metashape Python Reference (release 2.0.4). Agisoft LLC. https://www.agisoft.com/pdf/metashape_python_api_2_0_4.pdf

    Cushman, K. C., H. C. Muller-Landau, M. Detto, and M. Garcia. 2022a. Datasets for "Soils and topography control natural disturbance rates and thereby forest structure in a lowland tropical landscape". Smithsonian Tropical Research Institute. Smithsonian Figshare. https://doi.org/10.25573/data.17102600.v1

    Cushman, K. C., M. Detto, M. García, and H. C. Muller-Landau. 2022b. Soils and topography control natural disturbance rates and thereby forest structure in a lowland tropical landscape. Ecology Letters, 25: 1126-1138. https://doi.org/10.1111/ele.13978

    Vasquez, V. (2023). P-polycephalum/ForestLandscapes: v0.0.1-beta. Zenodo. https://doi.org/10.5281/zenodo.10328609
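
    Given the naming scheme above, mission metadata can be recovered from product file names with a small parser (an illustrative sketch; the example name simply follows the stated convention):

    ```python
    from datetime import date

    def parse_product_name(name: str) -> dict:
        """Parse Macrosite_site_year_month_day_product_missiontype names."""
        macrosite, site, year, month, day, product, missiontype = name.split("_")
        return {
            "macrosite": macrosite,
            "site": site,
            "date": date(int(year), int(month), int(day)),
            "product": product,        # orthomosaic, cloud, dsm, or the drone model for .zip files
            "missiontype": missiontype,
        }

    print(parse_product_name("BCI_whole_2023_03_18_orthomosaic_jacaranda"))
    ```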

  9. Instagram users in the United Kingdom 2019-2028

    • statista.com
    Updated Nov 22, 2024
    Cite
    Statista Research Department (2024). Instagram users in the United Kingdom 2019-2028 [Dataset]. https://www.statista.com/topics/3236/social-media-usage-in-the-uk/
    Explore at:
    Dataset updated
    Nov 22, 2024
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Statista Research Department
    Area covered
    United Kingdom
    Description

    The number of Instagram users in the United Kingdom was forecast to increase continuously between 2024 and 2028 by a total of 2.1 million users (+7.02 percent). After a ninth consecutive year of growth, the Instagram user base is estimated to reach 32 million users, a new peak, in 2028. Notably, the number of Instagram users has increased continuously over the past years.

    User figures, shown here for the platform Instagram, have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period and count multiple accounts held by one person only once.

    The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable datasets (see the supplementary notes under details for more information).

  10. Pinterest users in the United Kingdom 2019-2028

    • statista.com
    Updated Nov 22, 2024
    Cite
    Statista Research Department (2024). Pinterest users in the United Kingdom 2019-2028 [Dataset]. https://www.statista.com/topics/3236/social-media-usage-in-the-uk/
    Explore at:
    Dataset updated
    Nov 22, 2024
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Statista Research Department
    Area covered
    United Kingdom
    Description

    The number of Pinterest users in the United Kingdom was forecast to increase continuously between 2024 and 2028 by a total of 0.3 million users (+3.14 percent). After a ninth consecutive year of growth, the Pinterest user base is estimated to reach 9.88 million users, a new peak, in 2028. Notably, the number of Pinterest users has increased continuously over the past years.

    User figures, shown here for the platform Pinterest, have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period and count multiple accounts held by one person only once.

    The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable datasets (see the supplementary notes under details for more information).

  11. Reddit users in the United States 2019-2028

    • statista.com
    Updated Jul 30, 2025
    Cite
    Statista Research Department (2025). Reddit users in the United States 2019-2028 [Dataset]. https://www.statista.com/topics/3196/social-media-usage-in-the-united-states/
    Explore at:
    Dataset updated
    Jul 30, 2025
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Statista Research Department
    Area covered
    United States
    Description

    The number of Reddit users in the United States was forecast to increase continuously between 2024 and 2028 by a total of 10.3 million users (+5.21 percent). After a ninth consecutive year of growth, the Reddit user base is estimated to reach 208.12 million users, a new peak, in 2028. Notably, the number of Reddit users has increased continuously over the past years.

    User figures, shown here for the platform Reddit, have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period and count multiple accounts held by one person only once. Reddit users encompass both users that are logged in and those that are not.

    The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable datasets (see the supplementary notes under details for more information).

    Find more key insights for the number of Reddit users in countries like Mexico and Canada.

