47 datasets found
  1. Mice Dataset

    • universe.roboflow.com
    zip
    Updated Sep 30, 2024
    + more versions
    Cite
    Mice (2024). Mice Dataset [Dataset]. https://universe.roboflow.com/mice-4mz4h/mice-alsxi/dataset/3
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 30, 2024
    Dataset authored and provided by
    Mice
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Mouse Bounding Boxes
    Description

    Mice

    ## Overview
    
    Mice is a dataset for object detection tasks - it contains Mouse annotations for 509 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
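
    As a quick-start sketch for the downloaded export, assuming the zip was exported in COCO JSON format and extracted locally (the `mice-dataset/train/_annotations.coco.json` path is an assumption about the export layout, not something documented here), the annotations can be inspected with Python's standard library:

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical location of one split of a COCO-format Roboflow export.
ann_path = Path("mice-dataset/train/_annotations.coco.json")

with ann_path.open() as f:
    coco = json.load(f)

# Map category ids to names and count bounding boxes per class.
cat_names = {c["id"]: c["name"] for c in coco["categories"]}
box_counts = Counter(cat_names[a["category_id"]] for a in coco["annotations"])

print(f"{len(coco['images'])} images, {len(coco['annotations'])} boxes")
for name, count in box_counts.items():
    print(f"  {name}: {count}")
```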
    
  2. Database of Physiological Parameters for Early Life Rats and Mice

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Jan 11, 2021
    Cite
    U.S. EPA Office of Research and Development (ORD) - National Center for Environmental Assessment (NCEA) (2021). Database of Physiological Parameters for Early Life Rats and Mice [Dataset]. https://catalog.data.gov/dataset/database-of-physiological-parameters-for-early-life-rats-and-mice
    Explore at:
    Dataset updated
    Jan 11, 2021
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    The Database of Physiological Parameters for Early Life Rats and Mice compiles physiological parameter values for young rats and mice from the scientific literature. Modelers are encouraged to use the database as a starting point when researching the age-specific parameter values needed in their models.

  3. A standardized and reproducible method to measure decision-making in mice:...

    • figshare.com
    png
    Updated Feb 7, 2020
    Cite
    International Brain Laboratory (2020). A standardized and reproducible method to measure decision-making in mice: Data [Dataset]. http://doi.org/10.6084/m9.figshare.11636748.v7
    Explore at:
    Available download formats: png
    Dataset updated
    Feb 7, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    International Brain Laboratory
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Behavioral data associated with the IBL paper "A standardized and reproducible method to measure decision-making in mice." This dataset contains 3 million choices from 101 mice across seven laboratories at six different research institutions in three countries, obtained during a perceptual decision-making task. When citing this data, please also cite the associated paper: https://doi.org/10.1101/2020.01.17.909838. The data can also be accessed using DataJoint and web browser tools at data.internationalbrainlab.org. Additionally, we provide a Binder-hosted interactive Jupyter notebook showing how to access the data via the Open Neurophysiology Environment (ONE) interface in Python: https://mybinder.org/v2/gh/int-brain-lab/paper-behavior-binder/master?filepath=one_example.ipynb. For more information about the International Brain Laboratory, please see our website: www.internationalbrainlab.com. Beta disclaimer: please note that this is a beta version of the IBL dataset, which is still undergoing final quality checks. If you find any issues or inconsistencies in the data, please contact us at info+behavior@internationalbrainlab.org.
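
    As a minimal illustration of working with this kind of trial table, here is a hedged pandas sketch that summarizes choices per stimulus contrast; the column names (`signed_contrast`, `choice_right`) are assumptions for the example, not the dataset's actual schema, which is documented in the Binder notebook linked above:

```python
import pandas as pd

# Toy stand-in for a trials table; replace with data loaded via ONE/DataJoint.
trials = pd.DataFrame({
    "signed_contrast": [-1.0, -0.25, 0.0, 0.25, 1.0, 0.25, -0.25, 0.0],
    "choice_right":    [0,    0,     1,   1,    1,   1,    0,     1],
})

# Simple psychometric summary: proportion of rightward choices per contrast.
psychometric = (
    trials.groupby("signed_contrast")["choice_right"]
          .agg(["mean", "count"])
          .rename(columns={"mean": "p_choose_right", "count": "n_trials"})
)
print(psychometric)
```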

  4. Mice Dataset

    • universe.roboflow.com
    zip
    Updated Sep 5, 2025
    + more versions
    Cite
    Agares (2025). Mice Dataset [Dataset]. https://universe.roboflow.com/agares/mice-s2892/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 5, 2025
    Dataset authored and provided by
    Agares
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Micehealth Polygons
    Description

    Mice

    ## Overview
    
    Mice is a dataset for instance segmentation tasks - it contains Micehealth annotations for 1,068 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Mice Model Dataset

    • universe.roboflow.com
    zip
    Updated Sep 2, 2023
    + more versions
    Cite
    titan (2023). Mice Model Dataset [Dataset]. https://universe.roboflow.com/titan-jk1qh/mice-model
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 2, 2023
    Dataset authored and provided by
    titan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mice Bounding Boxes
    Description

    Mice Model

    ## Overview
    
    Mice Model is a dataset for object detection tasks - it contains Mice annotations for 974 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. Tree Functional Trait Application Project v1

    • figshare.com
    xlsx
    Updated Dec 10, 2025
    Cite
    Michael Belluau; Élise Bouchard; Mégane Déziel; Orane Mordacq; Christian Messier; Alain Paquette (2025). Tree Functional Trait Application Project v1 [Dataset]. http://doi.org/10.6084/m9.figshare.14039504.v2
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Dec 10, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Michael Belluau; Élise Bouchard; Mégane Déziel; Orane Mordacq; Christian Messier; Alain Paquette
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Tree Functional Trait Application Project v1. Corresponding authors: alain.paquette@gmail.com; belluaumichael@gmail.com.

    Our dataset was originally created to offer scientists and the public a "ready-to-use" functional trait database for trees. This dataset is the first step within a larger project to create a standardized tree functional trait database for applied projects. We aim to offer a clean and uniform database of traits with clear selection criteria for its retained values: the trait value is original and can be traced back to a single publication (duplicates are removed); the trait was measured on trees growing in their natural environment (not experimental); measurement units are correct; and all remaining possible outliers are verified manually by going back to the original publication for confirmation.

    The dataset currently consists of 207 tree species (based on the common species found in Montreal, Canada), of which 110 had values for all four traits. Of the remaining species, 45 had only a single missing trait value and 52 had more than one missing trait value. In this first version of the database, we selected four functional traits commonly used in ecology, known for their biological and ecological importance with regard to ecological strategies: leaf nitrogen content, specific leaf area, seed mass, and wood density (Chave et al., 2009; Díaz et al., 2016; Reich, 2014; Wright et al., 2004). The data were obtained from the TRY database (Kattge et al., 2011), the Kew Seed Information Database, and the literature, and were triaged to ensure that only values matching our criteria (see above) were retained.

    Data for these four traits, in addition to a binary distinction between gymnosperms and angiosperms, were used for a functional grouping analysis project on Montreal's tree diversity. The aim of this project was to create functionally relevant species groups in order to improve functional diversity in urban areas. For this project, some data imputations were first performed to complete the dataset, adding trait values for the 45 species missing only a single value. Missing values for seed mass were calculated using the average seed mass of the genus. For the other three traits, missing values were calculated using the multivariate imputation by chained equations (MICE) procedure (Azur, Stuart, Frangakis, & Leaf, 2011). Note that other approaches can be used for filling in missing values.

    To calculate trait dissimilarity matrices, we used Gower's distance for each pairwise combination of the 155 tree species in the dataset (110 complete species + 45 imputed species). We chose this distance metric because it can handle both quantitative and qualitative variables (Pavoine, Vallet, Dufour, Gachet, & Daniel, 2009). We then analyzed the dissimilarity matrices using agglomerative hierarchical cluster analysis and Ward's method. Hierarchical clustering examines the similarity between pairs of data points, whereas the Ward D method seeks to minimize the within-cluster variance (to obtain the most homogeneous clusters possible) and, as a consequence, to maximize the between-cluster variation (to obtain the most dissimilar clusters) at each clustering stage. The cut-off for the number of clusters was determined by an average silhouette width analysis followed by a biological interpretation of the clusters. Four clusters (groups) of functionally distinct tree species were thus retained, and three of these groups could then be divided into sub-groups, giving eight sub-groups (Figure B1). Again, other methods can be used to derive functional groups.

    An additional 52 species found in Montreal had two or three missing trait values (out of four) and could therefore not be included in the multivariate imputation (due to the large uncertainty this would introduce), so we proceeded differently for those species. Once the functional groups and sub-groups have been identified, it is possible to assign a group (or sub-group) to these species through a similarity analysis. Using a custom cosine similarity, modified to handle missing values, we calculated the similarity between a species and the different functional groups (or sub-groups). Since they fall in different groups, gymnosperm and angiosperm similarities were calculated separately. We could therefore assign the group with the highest similarity to each of the remaining 52 species.

    All calculations were performed in R version 3.5.2 (R Core Team, 2019) using the function mice from the mice package (van Buuren & Groothuis-Oudshoorn, 2011) and the functions agnes, daisy, and pam implemented within the cluster package (Maechler et al., 2019). The custom cosine function is provided below.

    References

    Azur, M. J., Stuart, E. A., Frangakis, C., & Leaf, P. J. (2011). Multiple imputation by chained equations: What is it and how does it work? International Journal of Methods in Psychiatric Research, 20(1), 40–49. https://doi.org/10.1002/mpr.329

    Chave, J., Coomes, D., Jansen, S., Lewis, S. L., Swenson, N. G., & Zanne, A. E. (2009). Towards a worldwide wood economics spectrum. Ecology Letters, 12(4), 351–366. https://doi.org/10.1111/j.1461-0248.2009.01285.x

    Díaz, S., Kattge, J., Cornelissen, J. H. C., Wright, I. J., Lavorel, S., Dray, S., … Gorné, L. D. (2016). The global spectrum of plant form and function. Nature, 529(7585), 167–171. https://doi.org/10.1038/nature16489

    Kattge, J., Díaz, S., Lavorel, S., Prentice, I. C., Leadley, P., Bönisch, G., … Wirth, C. (2011). TRY - a global database of plant traits. Global Change Biology, 17(9), 2905–2935. https://doi.org/10.1111/j.1365-2486.2011.02451.x

    Pavoine, S., Vallet, J., Dufour, A. B., Gachet, S., & Daniel, H. (2009). On the challenge of treating various types of variables: Application for improving the measurement of functional diversity. Oikos, 118(3), 391–402. https://doi.org/10.1111/j.1600-0706.2008.16668.x

    R Core Team. (2019). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. http://www.r-project.org/

    Reich, P. B. (2014). The world-wide “fast-slow” plant economics spectrum: A traits manifesto. Journal of Ecology, 102(2), 275–301. https://doi.org/10.1111/1365-2745.12211

    van Buuren, S., & Groothuis-Oudshoorn, K. (2011). mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3), 1–67. https://doi.org/10.18637/jss.v045.i03

    Wright, I. J., Reich, P. B., Westoby, M., Ackerly, D. D., Baruch, Z., Bongers, F., … Villar, R. (2004). The worldwide leaf economics spectrum. Nature, 428(6985), 821–827. https://doi.org/10.1038/nature02403

    R script:

    # Define function (cosine_na) that performs cosine similarity, taken from package lsa and modified to handle missing values.
    cosine_na
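
    The R function body itself does not survive in this listing; the following is a rough Python sketch of the idea the description lays out (cosine similarity restricted to trait values observed in both vectors), with a made-up species vector and group centroid, not the authors' actual cosine_na implementation:

```python
import numpy as np

def cosine_na(u, v):
    """Cosine similarity over the components that are observed (non-NaN)
    in both vectors; returns NaN when nothing is jointly observed."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return float("nan")
    u, v = u[mask], v[mask]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom else float("nan")

# Hypothetical trait vectors (leaf N, SLA, seed mass, wood density);
# the species is missing its seed mass value.
species = [2.1, 15.3, np.nan, 0.55]
group_centroid = [1.9, 14.8, 3.2, 0.60]
print(cosine_na(species, group_centroid))
```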

  7. Visual discrimination of natural scenes by mice

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Mar 24, 2018
    Cite
    Hira, Riichiro; Smith, Spencer; Yu, Yiyi (2018). Visual discrimination of natural scenes by mice [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000723565
    Explore at:
    Dataset updated
    Mar 24, 2018
    Authors
    Hira, Riichiro; Smith, Spencer; Yu, Yiyi
    Description

    Mice use vision to navigate in natural environments. However, there are no publicly available data sets on how well mice can discriminate natural scenes. Here, we provide data on a natural scene discrimination task for mice using an automated touchscreen system. In this two-alternative, forced-choice task, two natural scenes are displayed simultaneously on a touchscreen. One image is the target natural scene, and the other is one of several distractor natural scenes, which vary in similarity to the target. Mice are trained to touch the target image to receive a reward. We provide behavior data for nine mice, which includes correct rates and response times. We also provide the image files used in the task, and computer code to reproduce or extend the analysis we have previously presented. This data set can be used to test hypotheses about which image features influence mouse behavior in the task and to provide constraining data for models of the mouse visual system.

  8. Breathe Whole Mice Dataset

    • universe.roboflow.com
    zip
    Updated Aug 22, 2025
    Cite
    Devins workspace (2025). Breathe Whole Mice Dataset [Dataset]. https://universe.roboflow.com/devins-workspace-3xiyn/breathe-whole-mice-dm6a2/dataset/4
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Devins workspace
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mice Polygons
    Description

    Breathe Whole Mice

    ## Overview
    
    Breathe Whole Mice is a dataset for instance segmentation tasks - it contains Mice annotations for 793 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  9. Data from: Transcriptional analysis of dorsal skin from mice flown on the...

    • data.nasa.gov
    • gimi9.com
    • +1more
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). Transcriptional analysis of dorsal skin from mice flown on the RR-7 mission [Dataset]. https://data.nasa.gov/dataset/transcriptional-analysis-of-dorsal-skin-from-mice-flown-on-the-rr-7-mission-3b79f
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The objective of the Rodent Research-7 mission (RR-7) was to study the impact of the space environment on the gut microbiota of two strains of mice and how any changes in-turn affect the immune system, metabolic system, and circadian or daily rhythms. To this end, ten 11-week-old female C57BL/6J and ten 11-week-old female C3H/HeJ mice were flown to the International Space Station on June 29, 2018 on SpaceX-15 and housed in two Rodent Habitats. Samples of food, swabs from living surfaces, and fecal pellets were collected from each animal before launch and regularly during the mission. The mission also involved extended video collection (48 hr video segments per Habitat) to monitor circadian rhythms, and on-orbit mass measurement. After 25 days on-orbit, half of the mice of each strain were euthanized on the ISS with Ketamine/Xylazine/Acepromazine and cardiac puncture, after which carcasses were segmented in three sections and preserved in RNA later. After 75-76 days the remaining 5 animals from each group were euthanized and processed in the same manner. The 25-day dissected carcasses returned on SpX-15, and the 75-day dissected carcasses returned on SpX-16. In addition to the Flight group, three ground control groups were also part of the study: Basal (representing the pre-launch state), Vivarium (standard vivarium housing for the same duration of time as flight), and Ground (same habitat in the International Space Station Environment Simulator, ISSES). Twenty mice (10 of each strain) were included in each of these control groups, which were euthanized and processed on the same schedule and in the same manner as the flight samples. Dissections for tissues from all experimental groups were completed by the PI groups along with NASA's Biospecimen Sharing Program in February 2019. GeneLab received dorsal skin samples from forty C57BL/6J mice: 10 Basal, 5 Ground (25 days), 5 Ground (75 days), 5 Flight (25 days), 5 Flight (75 days), 5 Vivarium (25 days), 5 Vivarium (75 days). GeneLab received dorsal skin samples from forty C3H/HeJ mice: 10 Basal, 5 Ground (25 days), 5 Ground (75 days), 5 Flight (25 days), 5 Flight (75 days), 5 Vivarium (25 days), 5 Vivarium (75 days). From these skin samples, RNA was extracted, libraries generated (stranded, ribodepleted) and sequenced (target 60 M clusters at PE 98 bp).

  10. Mouse Viral Infection Study Dataset

    • kaggle.com
    zip
    Updated Mar 27, 2025
    Cite
    Batuhan Şahan (2025). Mouse Viral Infection Study Dataset [Dataset]. https://www.kaggle.com/datasets/brsahan/mouse-viral-infection-study-dataset
    Explore at:
    Available download formats: zip (7874 bytes)
    Dataset updated
    Mar 27, 2025
    Authors
    Batuhan Şahan
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset comes from a controlled laboratory study of viral infections in mice. It contains gene expression data and immune response measurements collected from both infected and non-infected mice.

    The main goal is to explore whether machine learning algorithms, particularly Support Vector Machines (SVM), can accurately classify the infection status of each mouse based on these biological features.

    This dataset is ideal for binary classification, feature selection, and biomarker discovery projects in a biomedical context.

    Includes:

    A CSV file with labeled samples

    A sample Jupyter Notebook using SVM for classification

    Inspiration: This dataset was organized to help students and researchers understand how gene-level biological responses can be modeled computationally to detect infection.
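
    The sample notebook itself is not reproduced here; as a hedged sketch of the SVM workflow the description refers to (the CSV name `mouse_viral_study.csv` and the `infected` label column are assumptions, not the file's documented schema):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical file and column names; adjust to the actual CSV in the download.
df = pd.read_csv("mouse_viral_study.csv")
X = df.drop(columns=["infected"])   # gene expression / immune response features
y = df["infected"]                  # binary infection status label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Scale features, then fit an RBF-kernel support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```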

  11. Mice Tracker Dataset

    • universe.roboflow.com
    zip
    Updated Sep 27, 2025
    Cite
    Audic (2025). Mice Tracker Dataset [Dataset]. https://universe.roboflow.com/audic-ymr95/mice-tracker-ft7ck
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 27, 2025
    Dataset authored and provided by
    Audic
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mice Bounding Boxes
    Description

    Mice Tracker

    ## Overview
    
    Mice Tracker is a dataset for object detection tasks - it contains Mice annotations for 2,657 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. Part II: A CLIMBER Meta-analysis, recovery time measured by behavioral...

    • odc-sci.org
    Updated Jul 12, 2024
    + more versions
    Cite
    Emma Iorio; Alireza Khanteymoori; Kenneth Fond; Lex Maliga Davis; Jan Schwab; Adam Ferguson; Abel Torres-Espin; Ralf Watzlawick (2024). Part II: A CLIMBER Meta-analysis, recovery time measured by behavioral outcome tests after contusive injuries on various spinal levels in rats and mice of both sexes from 7 public datasets (Individual Animal Data (IAD)) corresponding to Part 1 Literature-Extracted Data [Dataset]. http://doi.org/10.34945/F5J59P
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    The Ohio State University
    UCSF
    University of Waterloo
    University of Freiburg
    Authors
    Emma Iorio; Alireza Khanteymoori; Kenneth Fond; Lex Maliga Davis; Jan Schwab; Adam Ferguson; Abel Torres-Espin; Ralf Watzlawick
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    STUDY PURPOSE: The purpose of the study was to analyze recovery after spinal cord injury using several behavioral outcome tests. The study compared effect sizes from literature-extracted data (LED) to the corresponding publicly available raw data (individual animal data, IAD). Random-effects models and regression analyses were applied to evaluate predictors of neuro-conversion in LED versus IAD. Subgroup analyses were performed on animal sex, animal type, animal species, injury severity, injury segment, and sample sizes. Publications with common injury models (contusive injuries) and standardized endpoints (open field assessments) were included in the meta-analyses, restricted to studies that recorded open field assessments at 0-3 and 28-56 days post-operation. This dataset includes the individual animal data (IAD, Part 2) collected for the study. The code to replicate the study can be found on GitHub (https://github.com/ucsf-ferguson-lab/climber_meta_analysis2024.git). This dataset corresponds with another dataset in ODC-SCI (10.34945/F5DG6D), which contains the literature-extracted data (LED) taken directly from the 7 published articles.

    DATA COLLECTED: The individual animal data in this dataset were collected from raw data publicly available in ODC-SCI, merging 7 different publicly available datasets. Each study from the published articles includes contusion injuries of various severities and locations, which are indicated in this dataset. Both mouse and rat species of both sexes are included. Outcome scores at different days post-operation from the BBB, BMS, Grooming, and Forelimb Open Field tests are also included; these behavioral outcome scores over days post-operation were used to calculate effect sizes.

    DATA USAGE NOTES:
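
    As a small worked example of the effect-size step described above, here is a sketch of a standardized mean difference (Hedges' g) computed from group summaries; this is one common estimator for behavioral scores, not necessarily the exact estimator used in the meta-analysis, and the BMS numbers below are invented:

```python
import numpy as np

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with small-sample (Hedges) correction."""
    pooled_sd = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                        / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # small-sample correction factor
    return j * d

# Invented open-field (BMS) scores at 28 days post-operation for two groups.
print(hedges_g(mean_t=4.8, mean_c=3.1, sd_t=1.2, sd_c=1.0, n_t=12, n_c=12))
```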

  13. Mice Wound Dataset

    • universe.roboflow.com
    zip
    Updated Jan 23, 2024
    Cite
    MSA (2024). Mice Wound Dataset [Dataset]. https://universe.roboflow.com/msa-zflal/mice-wound
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 23, 2024
    Dataset authored and provided by
    MSA
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Wound Bounding Boxes
    Description

    Mice Wound

    ## Overview
    
    Mice Wound is a dataset for object detection tasks - it contains Wound annotations for 572 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
  14. Dataset from Mouse 9 for Sit & Goard: Distributed and Retinotopically...

    • nih.figshare.com
    bin
    Updated May 31, 2023
    + more versions
    Cite
    Kevin Sit; Michael Goard (2023). Dataset from Mouse 9 for Sit & Goard: Distributed and Retinotopically Asymmetric Processing of Coherent Motion in Mouse Visual Cortex [Dataset]. http://doi.org/10.35092/yhjc.12459422.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kevin Sit; Michael Goard
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accompanying dataset for Sit KK, Goard MJ, "Distributed and Retinotopically Asymmetric Processing of Coherent Motion in Mouse Visual Cortex." This dataset contains the data for mouse 9 of 10 (mouse SKKS050). See all data in the collection here: https://doi.org/10.35092/yhjc.c.5018363. These datasets contain calcium imaging data taken from L2/3 of the primary visual cortex in mice. Mice were head-fixed and passively watching a random dot kinematogram stimulus. The stimulus data vector for processing these data is also included. The accompanying code to analyze these data can be found at https://github.com/kevinksit/CoherentMotionProject. Data are either preprocessed DFF matrices, which represent the change in neural activity compared to a baseline, or raw images from the microscope. All data are processed with MATLAB 2018b. The purpose of these data is to understand the organization of neural activity in response to coherent motion in the murine visual cortex.

    ABSTRACT: Perception of visual motion is important for a range of ethological behaviors in mammals. In primates, specific visual cortical regions are specialized for processing of coherent visual motion. However, whether mouse visual cortex has a similar organization remains unclear, despite powerful genetic tools available for measuring population neural activity. Here, we use widefield and 2-photon calcium imaging of transgenic mice to measure mesoscale and cellular responses to coherent motion. Imaging of primary visual cortex (V1) and higher visual areas (HVAs) during presentation of natural movies and random dot kinematograms (RDKs) reveals varied responsiveness to coherent motion, with stronger responses in dorsal stream areas compared to ventral stream areas. Moreover, there is considerable anisotropy within visual areas, such that neurons representing the lower visual field are more responsive to coherent motion. These results indicate that processing of visual motion in mouse cortex is distributed heterogeneously both across and within visual areas.
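
    As a minimal sketch of what a ΔF/F ("DFF") trace is, assuming a raw fluorescence vector and a low-percentile baseline (the authors' actual preprocessing may define the baseline F0 differently):

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    """ΔF/F: change in fluorescence relative to a baseline F0, here taken as a
    low percentile of the trace (an assumption for illustration)."""
    trace = np.asarray(trace, dtype=float)
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

raw = np.array([100.0, 102.0, 150.0, 230.0, 120.0, 101.0])  # toy trace
print(delta_f_over_f(raw))
```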

  15. Mice Cropping Dataset

    • universe.roboflow.com
    zip
    Updated Sep 30, 2024
    Cite
    Mice cropping (2024). Mice Cropping Dataset [Dataset]. https://universe.roboflow.com/mice-cropping-djuh2/mice-cropping-dbrim
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 30, 2024
    Dataset authored and provided by
    Mice cropping
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Nose Bounding Boxes
    Description

    Mice Cropping

    ## Overview
    
    Mice Cropping is a dataset for object detection tasks - it contains Nose annotations for 1,092 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  16. Mice Breathe Holistic Dataset

    • universe.roboflow.com
    zip
    Updated Nov 26, 2025
    Cite
    Devins workspace (2025). Mice Breathe Holistic Dataset [Dataset]. https://universe.roboflow.com/devins-workspace-3xiyn/mice-breathe-holistic-zbpj6/dataset/15
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 26, 2025
    Dataset authored and provided by
    Devins workspace
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mice Polygons
    Description

    Mice Breathe Holistic

    ## Overview
    
    Mice Breathe Holistic is a dataset for instance segmentation tasks - it contains Mice annotations for 1,243 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  17. Does it Fart: Which animals fart?

    • kaggle.com
    zip
    Updated Nov 19, 2022
    Cite
    The Devastator (2022). Does it Fart: Which animals fart? [Dataset]. https://www.kaggle.com/datasets/thedevastator/a-comprehensive-study-of-animal-farts/suggestions
    Explore at:
    Available download formats: zip (14708 bytes)
    Dataset updated
    Nov 19, 2022
    Authors
    The Devastator
    Description

    Does it Fart: Which animals fart?

    From Mice to elephants, find out which creatures toot and which ones don't

    About this dataset

    Have you ever wondered whether or not animals fart? It's a popular question, and one that has led to much debate. This dataset contains information on animals and whether or not they fart. It also includes description notes, verified sources, and links to papers for further reading. So if you've ever wanted to know which animals let out a little gas, this is the dataset for you!

    How to use the dataset

    This dataset can be used to answer the question: Do animals fart? The dataset contains information on animals and whether or not they fart. The data was collected from different sources and verified by experts. To use this dataset, simply look at the 'Animal' column to see the name of the animal, and the 'Does it Fart?' column to see if the animal is known to fart or not.
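
    For example, here is a hedged pandas sketch of that lookup, using the fart.csv column names documented in the tables below (the exact True/False encoding of `does_it_fart` is an assumption):

```python
import pandas as pd

# Hypothetical local copy of the Kaggle file.
df = pd.read_csv("fart.csv")

# Keep rows whose does_it_fart value looks affirmative, whatever the encoding.
farters = df[df["does_it_fart"].astype(str).str.lower().isin(["true", "yes", "1"])]
print(farters[["animal", "scientific_name"]].head())
```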

    Research Ideas

    • To determine if an animal is likely to fart
    • To determine if an animal is likely to puke
    • To determine what animals are more likely to fart or puke

    Acknowledgements

    License

    See the dataset description for more information.

    Columns

    File: fart.csv

    | Column name | Description |
    |:---|:---|
    | animal | The name of the animal. (String) |
    | scientific_name | The scientific name of the animal. (String) |
    | does_it_fart | Whether or not the animal is known to fart. (Boolean) |
    | description_notes | A description of the animal. (String) |
    | paper_link | A link to the paper where the information was found. (String) |
    | verified_by | The person who verified the information. (String) |
    | column_g | |

    File: puke.csv

    | Column name | Description |
    |:---|:---|
    | Animal | The name of the animal. (String) |
    | Does it Puke? | Whether or not the animal is known to puke. (Boolean) |
    | Verified by | The source that verified the information about the animal. (String) |

    File: #DoesItFart.csv

    | Column name | Description |
    |:---|:---|
    | Verified by | The source that verified the information about the animal. (String) |
    | Scientific name | The scientific name of the animal. (String) |
    | Does it Fart? | Whether or not the animal is known to fart. (Boolean) |
    | Description/Notes | A description of the animal. (String) |
    | Paper/Link | A link to the paper where the information was found. (String) |

    File: sneeze.csv

    | Column name | Description |
    |:---|:---|
    | Animal | The name of the animal. (String) |
    | Does it Sneeze? | Whether or not the animal is known to sneeze. (Boolean) |
    | Notes | Notes about the animal, including information about its diet and habitat. (String) |

  18. Dataset for: Synchronized Activity in The Main and Accessory Olfactory Bulbs...

    • producciocientifica.uv.es
    • data-staging.niaid.nih.gov
    • +2more
    Updated 2017
    Cite
    Pardo-Bellver, Cecília; Martínez-Bellver, Sergio; Martínez-García, Fernando; Lanuza, Enrique; Teruel-Martí, Vicent (2017). Dataset for: Synchronized Activity in The Main and Accessory Olfactory Bulbs and Vomeronasal Amygdala Elicited by Chemical Signals in Freely Behaving Mice [Dataset]. https://producciocientifica.uv.es/documentos/668fc419b9e7c03b01bd4409
    Explore at:
    Dataset updated
    2017
    Authors
    Pardo-Bellver, Cecília; Martínez-Bellver, Sergio; Martínez-García, Fernando; Lanuza, Enrique; Teruel-Martí, Vicent
    Description

    The dataset contains raw and primary data associated with the projects Señales Vomeronasales Y Control Amigdalino Del Comportamiento Sociosexual: Un Modelo Experimental De La Neurobiologia Del Comportamiento Social Y Sus Alteraciones (BFU2013-47688-P) and Circuitos Neurales De La Atraccion Por Feromonas Sexuales Y La Aversion Por Señales De Enfermedad: Un Estudio Anatomico, Electrofisiologico Y Comportamental (BFU2016-77691-C2-2-P) from the Spanish Ministry of Economy, Industry and Competitiveness.

    The aim of this research is to investigate the integration of chemosensory information in mice, specifically focusing on the olfactory and vomeronasal systems. We aim to understand how the activity of both circuits is coordinated and integrated while freely behaving mice explore various stimuli. The study involves recording electrophysiological activity in the olfactory bulbs and the vomeronasal amygdala in response to neutral and conspecific stimuli, together with analyses performed to uncover the neural mechanisms and patterns of activity that mediate the integration of olfactory and vomeronasal information and ultimately influence the behavior of mice, particularly in response to conspecific cues.

    To investigate the oscillatory pattern of activity in the vomeronasal system, and to compare it with that shown by the olfactory system, we have performed recordings of the Local Field Potentials (LFP) in awake, freely behaving mice to which we presented olfactory stimuli (clean bedding or geraniol), or mixed olfactory-vomeronasal stimuli (bedding soiled by females, castrated males or intact males). In each animal, the recording electrodes were located in the main olfactory bulb (MOB) and accessory olfactory bulb (AOB), as well as in the medial amygdala (Me) and posteromedial cortical amygdala (PMCo). These recording sites allow us to characterize the pattern of oscillatory activity in the main centres of the vomeronasal system and, at the same time, to evaluate whether they are different and independent from the sniffing-induced olfactory oscillations present in the MOB.

    We supply information on the provided metadata, which refer to individual raw files, and details on the specific data formats used. For further explanation please refer to the file DataSet_Overview_OlfVnInt.rtf.
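
    As an illustrative sketch of the kind of spectral analysis such LFP recordings support, here is a Welch power spectral density computed on a synthetic one-channel trace; the sampling rate and variable names are assumptions, not the dataset's actual file layout (see DataSet_Overview_OlfVnInt.rtf for that):

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)  # toy LFP

# Welch PSD to inspect oscillatory activity, e.g. theta-band (4-12 Hz) power.
freqs, psd = signal.welch(lfp, fs=fs, nperseg=2048)
theta_power = psd[(freqs >= 4) & (freqs <= 12)].mean()
print("Mean theta-band power:", theta_power)
```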

    This dataset was used in the publications:

    Pardo-Bellver, C., Martínez-Bellver, S., Martínez-García, F. et al. Synchronized Activity in The Main and Accessory Olfactory Bulbs and Vomeronasal Amygdala Elicited by Chemical Signals in Freely Behaving Mice. Scientific Reports 7, 9924 (2017).

    Pardo-Bellver, C., Vila-Martin, M. E., Martínez-Bellver, S. et al. Neural activity patterns in the chemosensory network encoding vomeronasal and olfactory information in mice. Frontiers in Neuroanatomy 16 (2022).

  19. Isobaric matching between runs and novel PSM-level normalization in MaxQuant...

    • ebi.ac.uk
    • data-staging.niaid.nih.gov
    • +1more
    + more versions
    Cite
    Sung-Huan Yu, Isobaric matching between runs and novel PSM-level normalization in MaxQuant strongly improve reporter ion-based quantification - Peli1 knock-out mice dataset [Dataset]. https://www.ebi.ac.uk/pride/archive/projects/PXD019881
    Explore at:
    Authors
    Sung-Huan Yu
    Variables measured
    Proteomics
    Description

    Isobaric labeling has the promise of combining high sample multiplexing with precise quantification. However, normalization issues and the missing value problem of complete n-plexes hamper quantification across more than one n-plex. Here we introduce two novel algorithms implemented in MaxQuant that substantially improve the data analysis with multiple n-plexes. First, isobaric matching between runs (IMBR) makes use of the three-dimensional MS1 features to transfer identifications from identified to unidentified MS/MS spectra between LC-MS runs in order to utilize reporter ion intensities in unidentified spectra for quantification. On typical datasets, we observe a significant gain in MS/MS spectra that can be used for quantification. Second, we introduce a novel PSM-level normalization, applicable to data with and without a common reference channel. It is a weighted median-based method, in which the weights reflect the number of ions that were used for fragmentation. On a typical dataset, we observe complete removal of batch effects and dominance of the biological sample grouping after normalization. This dataset is one of the datasets used for the study. It is TMT 10-plex with a reference channel.
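
    As a sketch of the weighted-median idea described above (not MaxQuant's actual implementation), where each PSM-level ratio is weighted by the number of ions used for fragmentation:

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` where each entry counts with the given weight."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    return values[np.searchsorted(cum, 0.5 * cum[-1])]

# Invented PSM-level log-ratios vs. a reference channel and their ion counts.
log_ratios = [0.10, -0.05, 0.30, 0.02, -0.20]
ion_counts = [1200, 300, 50, 2000, 800]
print("Normalization factor:", weighted_median(log_ratios, ion_counts))
```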

  20. Data from: Terabyte-scale supervised 3D training and benchmarking dataset of...

    • idr-testing.openmicroscopy.org
    Updated Jun 19, 2023
    Cite
    (2023). Terabyte-scale supervised 3D training and benchmarking dataset of the mouse kidney [Dataset]. https://idr-testing.openmicroscopy.org/study/idr0147/
    Explore at:
    Dataset updated
    Jun 19, 2023
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    C57BL/6J mice were purchased from Janvier Labs (Le Genest-Saint-Isle, France) and kept in individually ventilated cages with ad libitum access to water and standard diet (Kliba Nafag 3436, Kaiseraugst, Switzerland) in 12 h light/dark cycles. Dataset 1 derives from the left kidney of a male mouse, 15 weeks of age with a body weight of 28.0 g. Dataset 2 is the right kidney of the same mouse. Dataset 3 derives from the right kidney of a female mouse, 15 weeks of age with a body weight of 22.5 g. All animal experiments were approved by the cantonal veterinary office of Zurich, Switzerland, in accordance with the Swiss federal animal welfare regulations (license numbers ZH177/13 and ZH233/15). Mice were anaesthetized with ketamine/xylazine. A blunted 21G butterfly needle was inserted retrogradely into the abdominal aorta and fixed with a ligation. The abdominal aorta and superior mesenteric artery above the renal arteries were ligated, the vena cava opened as an outlet and the kidneys were flushed with 10 ml, 37 °C phosphate-buffered saline (PBS) to remove the blood, then fixed with 50 ml 37 °C 4 % paraformaldehyde in PBS (PFA) solution at 150 mmHg hydrostatic pressure. 2.4 g of 1,3-diiodobenzene (Sigma-Aldrich, Schnelldorf, Germany) were dissolved in 7.5 g of 2-butanone (Sigma-Aldrich) and mixed with 7.5 g PU4ii resin (vasQtec, Zurich, Switzerland) and 1.3 g PU4ii hardener. The mixture was filtered through a paper filter and degassed extensively in a vacuum chamber to minimize bubble formation during polymerization, and perfused at a constant pressure of no more than 200 mmHg until the resin mixture solidified. Kidneys were excised and stored in 15 ml 4 % PFA. For scanning, they were embedded in 2 % agar in PBS in 0.5 ml polypropylene centrifugation tubes. Kidneys were quality-checked with a nanotom® m (phoenix|x-ray, GE Sensing & Inspection Technologies GmbH, Wunstorf, Germany). Samples showing insufficient perfusion or bleeding of resin into the renal capsule or sinuses were excluded. Kidneys were scanned at the ID19 tomography beamline of the European Synchrotron Radiation Facility (ESRF, Grenoble, France) using pink beam with a mean photon energy of 19 keV. Radiographs were recorded at a sample-detector distance of 28 cm with a 100 µm Ce:LuAG scintillator, 4× magnification lens and a pco.edge 5.5 camera with a 2560 × 2160 pixel array and 6.5 µm pixel size, resulting in an effective pixel size of 1.625 µm. Radiographs were acquired with a half-acquisition scheme in order to extend the field of view to 8 mm. Six height steps were recorded for each kidney, with half of the vertical field of view overlapping between each height step, resulting in fully redundant acquisition of the inner height steps. 5125 radiographs were recorded for each height step with 0.1 s exposure time, resulting in a scan time of 1 h for a whole kidney. 100 flat-field images were taken before and after each height step for flat-field correction. Images were reconstructed using the beamline’s in-house PyHST2 software, using a Paganin-filter with a low δ/β ratio of 50 to limit loss in resolution and appearance of gradients close to large vessels. Registration for stitching two half-acquisition radiographs to the full field of view was performed manually with 1 pixel accuracy. Data size for the reconstructed datasets was 1158 GB per kidney. 
Outliers in intensity in the recorded flat fields were segmented by noise reduction with 2D continuous curvelets, followed by thresholding to calculate radius and coordinates of the ring artefacts. The redundant acquisition of the central four height steps allowed us to replace corrupted data with a weighted average during stitching. The signals of the individual slices were zeroed in the presence of the rings, summed up and normalized by counting the number of uncorrupted signals. In the outer slices, where no redundant data was available, and in locations where rings coincided in both height steps, we employed a discrete cosine transform-based inpainting technique with a simple iterative approach, where we picked smoothing kernels progressively smaller in size and reconstructed the signal in the target areas by smoothing the signal everywhere at each iteration. The smoothed signal in the target areas was then combined with the original signal elsewhere to form a new image. In the next iteration, in turn, the new image was then smoothed to rewrite the signal at the target regions. The final inpainted signal exhibits multiple scales since different kernel widths are considered at different iterations. The alignment for stitching the six stacks was determined by carrying out manual 3D registration and double checking against pairwise stack-stack phase-correlation analysis. The stitching process reduced the dataset dimensions per kidney to 4608 × 4608 × 7168 pixels, totaling 567 GB. We performed image enhancement based on 3D discretized continuous curvelets, in a similar fashion as Starck et al., but with second generation curvelets (i.e., no Radon transform) in 3D. The enhancement was carried out globally by leveraging the Fast Fourier Transform with MPI-FFTW, considering about 100 curvelets. The “wedges” (curvelets in the spectrum) have a conical shape and cover the unit sphere in an approximately uniform fashion. For a given curvelet, a per-pixel coefficient is obtained by computing an inverse Fourier transform of its wedge and the image spectrum. We then truncated these coefficients in the image domain against a hard threshold, and forward-transformed the curvelet again into the Fourier space, modulated the curvelets with the truncated coefficients and superposed them. As a result, the pixel intensities were compressed to a substantially smaller range of values, thus helping to avoid over- and under-segmentation of large and small vessels, respectively. A threshold-based segmentation followed the image enhancement. The enhancement parameters and threshold were manually chosen by examining six randomly chosen regions of interest. Spurious islands were removed by 26-connected component analysis, and cavities were removed by 6-connected component analysis. The bulk of the processing workload, required to transform data into an actionable training set, was carried out at the Zeus cluster of the Pawsey supercomputing centre. Zeus consisted of hundreds of computing nodes featuring Intel Xeon Phi (Knights Landing) many-core CPUs, together with 96 GB of ``special’’ high-bandwidth memory (HBM/MCDRAM), as well as 128 GB of conventional DDR4 RAM. The final training and assessments were carried out at the Euler VI cluster of ETH Zurich, with two-socket nodes featuring AMD EPYC 7742 (Rome) CPUs and 512 GB of DDR4 RAM. A machine learning-based approach relying on invariant scattering convolution networks was employed to segment the glomeruli and remove perirenal fat from the blood vessel segment. 
For the glomerular training data, three selected regions of interest of 512 × 256 × 256 voxels in size were selected from the cortical region of one kidney (dataset 2) and segmented by a single annotator by fully manual contouring in all slices. For the fat, manual work was reduced by providing an initial semiautomatic segmentation, which the manual annotation then corrected. The training data were supplemented by additional regions of interest that contained no glomeruli or fat at all, and thus did not require manual annotation. The manual annotations were then used to train a hybrid algorithm that relied on a 3D scattering transform convolutional network topped with a dense neural network. The scattering transform relied upon ad-hoc designed 3D kernels (Morlet’s wavelet with different sizes and orientations) that uniformly covered all directions at different scales. In the scattering convolutional network, filter nonlinearities were obtained by taking the magnitude of the filter responses and convolving them again with the kernels in a cascading fashion. These nonlinearities are designed to be robust against small Lipschitz-continuous deformations of the image. In contrast to our curvelet-based image enhancement approach, we decomposed the image into cubic tiles, then applied a windowed (thus local) Fourier transform on the tiles by considering regions about twice their size around them. While it would have been possible to use a convolutional network based upon a global scattering transform, this would have produced a very large number of features that would have had to be consumed at once, leading to an intermediate footprint in the petabyte-scale, exceeding the available memory of the cluster. The scattering transform convolutional network produced a stack of a few hundred scalar feature maps per pixel. If considered as a “fiber bundle”, the feature map stack is equivariant under the symmetry group of rotations (i.e., the stack is a regular representation of the 3D rotation group SO(3)). This property can be exploited by further processing the feature maps with a dense neural network with increased parameter sharing across the hidden layers, making the output layer-invariant to rotations.
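
    As a toy sketch of the final threshold-and-cleanup step described above (threshold-based segmentation, removal of spurious islands with 26-connectivity, and filling of cavities), applied to a small random volume rather than the terabyte-scale scans; the threshold and minimum island size below are arbitrary:

```python
import numpy as np
from scipy import ndimage

# Small random stand-in for an enhanced image tile.
volume = np.random.default_rng(0).random((64, 64, 64))
binary = volume > 0.7                              # threshold-based segmentation

# Remove spurious islands using 26-connectivity (full 3x3x3 structuring element).
structure = np.ones((3, 3, 3), dtype=bool)
labels, n_components = ndimage.label(binary, structure=structure)
sizes = ndimage.sum(binary, labels, index=np.arange(1, n_components + 1))
keep = np.isin(labels, np.flatnonzero(sizes >= 50) + 1)

# Fill enclosed cavities (default 6-connected background connectivity).
cleaned = ndimage.binary_fill_holes(keep)
print(cleaned.sum(), "foreground voxels after cleanup")
```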
