License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
Particle Exir is a dataset for object detection tasks - it contains Particle annotations for 1,332 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Apache License 2.0, https://www.apache.org/licenses/LICENSE-2.0
This dataset contains 2,000 simulated particle measurements designed to mimic detector data from high-energy physics (HEP) experiments, such as those at the Large Hadron Collider (LHC). Inspired by the "Hybrid Ensemble Approach for Particle Track Reconstruction and Classification in High-Energy Physics" research paper, it provides synthetic data for machine learning tasks like particle track reconstruction, particle type classification, and kinematic property prediction. The data includes energy, momentum, and 3D spatial coordinates (x, y, z), along with a derived distance feature, making it suitable for clustering, regression, and sequence modelling.
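As a quick orientation, the sketch below loads such a table and recomputes the derived distance feature. The file name, the column names, and the assumption that the distance is the Euclidean distance of (x, y, z) from the origin are all hypothetical, not taken from the dataset documentation.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names for the 2,000 simulated measurements.
df = pd.read_csv("particle_measurements.csv")

# Assumed definition of the derived feature: Euclidean distance of the
# measured (x, y, z) point from the origin.
df["distance_check"] = np.sqrt(df["x"]**2 + df["y"]**2 + df["z"]**2)
print(df[["x", "y", "z", "distance_check"]].describe())
```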
License: U.S. Government Works, https://www.usa.gov/government-works
Abstract

This data set consists of the JUNO JEDI (Jupiter Energetic-Particle Detector) uncalibrated observations, also known as EDRs. The system is made up of 3 instrument subsystems (pucks) aligned in three directions on the spinning JUNO spacecraft. The pucks each have 6 look directions (telescopes) with a time of flight (TOF) and a deposited energy detection (SSD) system. In addition, the pulse height of the signal in the TOF system can be used for energy measurement. The instruments can be operated in a variety of modes, differing in the way they use and combine these measurements. More details are available in the INSTRUMENT.CAT file and the JEDI SIS.

The EDRs are organized in files covering one day of spacecraft event time (SCET). There are potentially 10 different files per puck, corresponding to the available data gathering modes: HIERSESP, HIERSISP, HIERSTOFXER, HIERSTOFXPHR, LOERSESP, LOERSISP, LOERSTOFXER, LOERSTOFXPHR, NONPTOFXER, NONPTOFXPHR. Depending on how the pucks are operated, not all data types will be available for all pucks for any particular time range.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Original source from Kaggle : https://www.kaggle.com/c/trackml-particle-identification/data
The dataset comprises multiple independent events, where each event contains simulated measurements (essentially 3D points) of particles generated in a collision between proton bunches at the Large Hadron Collider at CERN. The goal of the tracking machine learning challenge is to group the recorded measurements or hits for each event into tracks, sets of hits that belong to the same initial particle. A solution must uniquely associate each hit to one track. The training dataset contains the recorded hits, their ground truth counterpart and their association to particles, and the initial parameters of those particles. The test dataset contains only the recorded hits.
Once unzipped, the dataset is provided as a set of plain .csv files. Each event has four associated files that contain hits, hit cells, particles, and the ground truth association between them. The common prefix, e.g. event000000010, is always event followed by 9 digits.
event000000000-hits.csv
event000000000-cells.csv
event000000000-particles.csv
event000000000-truth.csv
event000000001-hits.csv
event000000001-cells.csv
event000000001-particles.csv
event000000001-truth.csv
Event hits
The hits file contains the following values for each hit/entry:
hit_id: numerical identifier of the hit inside the event.
x, y, z: measured x, y, z position (in millimeter) of the hit in global coordinates.
volume_id: numerical identifier of the detector group.
layer_id: numerical identifier of the detector layer inside the group.
module_id: numerical identifier of the detector module inside the layer.
The volume/layer/module id could in principle be deduced from x, y, z. They are given here to simplify detector-specific data handling.
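For illustration, a minimal sketch of reading one event's hits with pandas and filtering on the detector identifiers described above; the event prefix and the concrete volume/layer values are arbitrary examples.

```python
import pandas as pd

# Load the hits of a single event.
hits = pd.read_csv("event000000010-hits.csv")

# Select the hits recorded in one detector volume and layer; the concrete
# identifier values here are only examples.
selection = hits[(hits["volume_id"] == 8) & (hits["layer_id"] == 2)]
print(len(selection), "hits in volume 8, layer 2 out of", len(hits))
```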
Event truth
The truth file contains the mapping between hits and generating particles and the true particle state at each measured hit. Each entry maps one hit to one particle.
hit_id: numerical identifier of the hit as defined in the hits file.
particle_id: numerical identifier of the generating particle as defined in the particles file. A value of 0 means that the hit did not originate from a reconstructible particle, but e.g. from detector noise.
tx, ty, tz: true intersection point in global coordinates (in millimeters) between the particle trajectory and the sensitive surface.
tpx, tpy, tpz: true particle momentum (in GeV/c) in the global coordinate system at the intersection point. The corresponding vector is tangent to the particle trajectory at the intersection point.
weight: per-hit weight used for the scoring metric; the total sum of weights within one event equals one.
Event particles
The particles file contains the following values for each particle/entry:
particle_id: numerical identifier of the particle inside the event.
vx, vy, vz: initial position or vertex (in millimeters) in global coordinates.
px, py, pz: initial momentum (in GeV/c) along each global axis.
q: particle charge (as multiple of the absolute electron charge).
nhits: number of hits generated by this particle.
All entries contain the generated information or ground truth.
Event hit cells
The cells file contains the constituent active detector cells that comprise each hit. The cells can be used to refine the hit-to-track association. A cell is the smallest granularity inside each detector module, much like a pixel on a screen, except that depending on the volume_id a cell can be a square or a long rectangle. It is identified by two channel identifiers that are unique within each detector module and encode the position, much like the column/row numbers of a matrix. A cell can also provide signal information that the detector module has recorded in addition to the position. Depending on the detector type, only one of the channel identifiers may be valid (e.g. for the strip detectors), and the signal value might have a different resolution.
hit_id: numerical identifier of the hit as defined in the hits file.
ch0, ch1: channel identifier/coordinates unique within one module.
value: signal value information, e.g. how much charge a particle has deposited.
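Putting the pieces together, the sketch below joins the hits of one event with their ground-truth labels and groups them into tracks, following the file layout described above; the event prefix is an arbitrary example.

```python
import pandas as pd

prefix = "event000000010"  # example event prefix
hits = pd.read_csv(f"{prefix}-hits.csv")
truth = pd.read_csv(f"{prefix}-truth.csv")

# Attach the generating particle to each hit, drop noise hits
# (particle_id == 0), and group the rest into ground-truth tracks.
labeled = hits.merge(truth[["hit_id", "particle_id"]], on="hit_id")
tracks = labeled[labeled["particle_id"] != 0].groupby("particle_id")
print("reconstructible tracks in this event:", tracks.ngroups)
```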
Additional detector geometry information
The detector is built from silicon slabs (or modules, rectangular or trapezoidal), arranged in cylinders and disks, which measure the position (or hits) of the particles that cross them. The detector modules are organized into detector groups or volumes identified by a volume id. Inside a volume they are further grouped into layers identified by a layer id. Each layer can contain an arbitrary number of detector modules, the smallest geometrically distinct detector object, each identified by a module_id. Within each group, detector modules are of the same type and have, e.g., the same granularity. All simulated detector modules are so-called semiconductor sensors that are built from thin silicon sensor chips. Each module can be represented by a two-dimensional, planar, bounded sensitive surface. These sensitive surfaces are subdivided into regular grids that define the detector cells, the smallest granularity within the detector.
Each module has a different position and orientation described in the detectors file. A local, right-handed coordinate system is defined on each sensitive surface such that the first two coordinates u and v are on the sensitive surface and the third coordinate w is normal to the surface. The orientation and position are defined by the following transformation
pos_xyz = rotation_matrix * pos_uvw + translation
that transforms a position described in local coordinates u,v,w into the equivalent position x,y,z in global coordinates using a rotation matrix and a translation vector (cx,cy,cz).
volume_id: numerical identifier of the detector group.
layer_id: numerical identifier of the detector layer inside the group.
module_id: numerical identifier of the detector module inside the layer.
cx, cy, cz: position of the local origin in the global coordinate system (in millimeter).
rot_xu, rot_xv, rot_xw, rot_yu, ...: components of the rotation matrix to rotate from local u,v,w to global x,y,z coordinates.
module_t: half thickness of the detector module (in millimeter).
module_minhu, module_maxhu: the minimum/maximum half-length of the module boundary along the local u direction (in millimeter).
module_hv: the half-length of the module boundary along the local v direction (in millimeter).
pitch_u, pitch_v: the size of detector cells along the local u and v direction (in millimeter).
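A small numpy sketch of the local-to-global transformation defined above. The file name detectors.csv and the full set of rotation column names (extending the rot_xu, rot_xv, ... pattern through rot_zw) are assumptions.

```python
import numpy as np
import pandas as pd

detectors = pd.read_csv("detectors.csv")  # assumed file name, one row per module
row = detectors.iloc[0]

# Rotation from local (u, v, w) to global (x, y, z); column names follow
# the rot_xu, rot_xv, ... pattern described above.
R = np.array([
    [row["rot_xu"], row["rot_xv"], row["rot_xw"]],
    [row["rot_yu"], row["rot_yv"], row["rot_yw"]],
    [row["rot_zu"], row["rot_zv"], row["rot_zw"]],
])
t = np.array([row["cx"], row["cy"], row["cz"]])

pos_uvw = np.array([0.1, -0.2, 0.0])  # a point on the sensitive surface, in mm
pos_xyz = R @ pos_uvw + t             # pos_xyz = rotation_matrix * pos_uvw + translation
```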
There are two different module shapes in the detector, rectangular and trapezoidal. The pixel detector (with volume_id = 7, 8, 9) is fully built from rectangular modules, and so are the cylindrical barrels in volume_id = 13, 17. The remaining layers are made of disks that need trapezoidal shapes to cover the full disk.
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
A synthetically generated dataset of honey samples observed by a digital microscope. The goal of this dataset is to train machine learning models to detect the pollen found in the images. This dataset contains 500 images with three different classes of particles commonly found in honey.
The annotations are provided in YOLO format in a different directory. Each annotation is associated with the image by an unique ID. The images follow the naming convention "image_{ID}.png" and the annotations follow the convention "image_{ID}.txt".
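A minimal sketch of pairing annotations with images and parsing the label lines; the annotations directory name is an assumption, and only the standard YOLO field layout (class id followed by normalized box center and size) is assumed.

```python
from pathlib import Path

labels_dir = Path("annotations")  # assumed directory of image_{ID}.txt files

for label_file in sorted(labels_dir.glob("image_*.txt")):
    image_name = label_file.stem + ".png"  # matches the image_{ID}.png convention
    for line in label_file.read_text().splitlines():
        # Standard YOLO layout: class id, box center x/y, box width/height,
        # all normalized to [0, 1] relative to the image dimensions.
        class_id, cx, cy, w, h = line.split()
        print(image_name, class_id, float(cx), float(cy), float(w), float(h))
```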
This dataset has been created by Sonicat Systems and is published under Creative Commons Attribution Non-Commercial Share Alike license.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
Particle Detection In Clusters is a dataset for object detection tasks - it contains Particle annotations for 342 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Data description
Datasets generated using Key4HEP and the CLIC detector model suitable for particle flow reconstruction studies.
The datasets contain generator particles, reconstructed tracks and calorimeter hits, reconstructed Pandora PF particles and their respective links in the EDM4HEP format.
The following processes have been simulated with Pythia 8:
p8_ee_tt_ecm380: ee -> ttbar, center of mass energy at 380 GeV
p8_ee_qq_ecm380: ee -> Z* -> qqbar, center of mass energy at 380 GeV
p8_ee_ZH_Htautau: ee -> ZH -> Higgs decaying to tau leptons, center of mass energy at 380 GeV
p8_ee_WW_fullhad: ee -> WW -> W decaying hadronically, center of mass energy at 380 GeV
p8_ee_tt_ecm380_PU10: ee -> ttbar with on average 10 Poisson-distributed events from ee -> gg overlaid, center of mass energy at 380 GeV
The following single particle gun samples have been generated with ddsim:
e+/e-: single electron with energy between 1 and 100 GeV
mu+/mu-: single muon with energy between 1 and 100 GeV
kaon0L: single K0L with energy between 1 and 100 GeV
neutron: single neutron with energy between 1 and 100 GeV
pi+/pi-: single charged pion with energy between 1 and 100 GeV
pi0: single neutral pion with energy between 1 and 100 GeV
gamma: single photon with energy between 1 and 100 GeV
The detector simulation has been done with Geant4, the reconstruction with Marlin interfaced via Key4HEP which includes PF reconstruction with Pandora, all using publicly available models and code.
Contents
This record includes the following files:
*_10files.tar: small archives of 10 files for each data sample, suitable for testing
dataset_full.txt: the full list of files, hosted at the Jülich HPC courtesy of the RAISE CoE project, ~2.5TB total
*.cmd: the Pythia8 cards
pythia.py: the pythia steering code for Key4HEP
run_sim.sh: the steering script for generating, simulating and reconstructing a single file of 100 events from the p8_ee_tt_ecm380, p8_ee_qq_ecm380, p8_ee_ZH_Htautau, p8_ee_WW_fullhad datasets
run_sim_pu.sh: the steering script for generating, simulating and reconstructing a single file of 100 events from the p8_ee_tt_ecm380_PU10 dataset
run_sim_gun.sh: the steering script for generating the single-particle gun samples
run_sim_gun_np.sh: the steering script for generating multi-particle gun samples (extensive datasets have not yet been generated)
check_files.py: the main driver script that configures the full statistics and creates submission scripts for all the simulations
PandoraSettings.zip: the settings used for Pandora PF reconstruction
main19.cc: the Pythia8+HepMC driver code for generating the events with PU overlay
clicRec_e4h_input.py: the steering configuration of the reconstruction modules in Key4HEP
clic_steer.py: the steering configuration of the Geant4 simulation modules in Key4HEP
clic-visualize.ipynb: an example notebook demonstrating how the dataset can be loaded and events visualized in Python
visualization.mp4: an example visualization of the hits and generator particles of a single ttbar event from the dataset
Dataset semantics
Each file consists of event records. Each event contains structured branches of the relevant physics data. The branches relevant to particle flow reconstruction include:
MCParticles: the ground truth generator particles
ECALBarrel, ECALEndcap, ECALOther, HCALBarrel, HCALEndcap, HCALOther, MUON: reconstructed hits in the various calorimeter subsystems
SiTracks_Refitted: the reconstructed tracks
PandoraClusters: the calorimeter hits, clustered by Pandora to calorimeter clusters
MergedRecoParticles: the reconstructed particles from the Pandora particle flow algorithm
CalohitMCTruthLink: the links between MC particles and reconstructed calorimeter hits
SiTracksMCTruthLink: the links between MC particles and reconstructed tracks
The full details of the EDM4HEP format are available here.
Dataset characteristics
The full dataset in dataset_full.txt consists of 43 tar files of up to 100GB each. The tar files contain in total 58068 files, 2.5TB in the ROOT EDM4HEP format.
The subset in *_10files.tar consists of 150 files, 26GB in the ROOT EDM4HEP format.
How can you use these data?
The ROOT files can be directly loaded with the uproot Python library.
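A minimal sketch, assuming the standard EDM4HEP layout with an "events" tree and branch names following the collections listed above; the file name is illustrative.

```python
import uproot

with uproot.open("reco_p8_ee_tt_ecm380_1.root") as f:  # illustrative file name
    events = f["events"]  # assumed EDM4HEP tree name
    # Read the generator-particle PDG codes for all events as a jagged array.
    pdg = events["MCParticles.PDG"].array()
    print("events:", len(pdg), "particles in first event:", len(pdg[0]))
```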
Disclaimer
These are simulated samples suitable for conceptual machine learning R&D and software performance studies. They have not been calibrated with respect to real data, and should not be used to derive physics projections about the detectors.
Neither CLIC nor CERN endorse any works, scientific or otherwise, produced using these data. All releases will have a unique DOI that you are requested to cite in any applications or publications.
License: U.S. Government Works, https://www.usa.gov/government-works
The SP-1 experiment on Vega spacecraft was intended for studying the spatial and mass distributions of dust particles in the cometary coma over the mass range 1.e-16 to 1.e-6 g. Covering such a broad mass range was made possible by using sensors of two types, namely, impact plasma and acoustic.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Synopsis
Machine-learning friendly format of tracks, clusters and target particles in electron-positron events, simulated with the CLIC detector. Ready to be used with jpata/particleflow:v2.3.0. Derived from the EDM4HEP ROOT files in https://zenodo.org/record/8260741.
clic_edm_ttbar_pf.zip: e+e- -> ttbar, center of mass energy at 380 GeV
clic_edm_qq_pf.zip: e+e- -> Z* -> qqbar, center of mass energy at 380 GeV
clic_edm_ww_fullhad_pf.zip: e+e- -> WW -> W decaying hadronically, center of mass energy at 380 GeV
clic-tfds.ipynb: an example notebook on how to load the files
Contents
Each .zip file contains the dataset in the tensorflow-datasets array_record format. We have split the full datasets into 10 subsets; due to space considerations on Zenodo, two subsets from each dataset are uploaded here. Each dataset contains a train and test split of events.
Dataset semantics (to be updated)
Each dataset consists of events that can be iterated over using the tensorflow-datasets library and used in either tensorflow or pytorch. Each event has the following information available:
X: the reconstruction input features, i.e. tracks and clusters
ytarget: the ground truth particles with the features ["PDG", "charge", "pt", "eta", "sin_phi", "cos_phi", "energy", "jet_idx"], with "jet_idx" corresponding to the gen-jet assignment of this particle
ycand: the baseline Pandora PF particles with the features ["PDG", "charge", "pt", "eta", "sin_phi", "cos_phi", "energy", "jet_idx"], with "jet_idx" corresponding to the gen-jet assignment of this particle
The full semantics, including the list of features for X, are available at https://github.com/jpata/particleflow/blob/v2.3.0/mlpf/heptfds/clic_pf_edm4hep/utils_edm.py and https://github.com/jpata/particleflow/blob/v2.3.0/mlpf/data/key4hep/postprocessing.py.
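A minimal loading sketch, assuming an unzipped dataset directory and a tfds version that supports data sources; the concrete path, including the version subdirectory, is illustrative.

```python
import tensorflow_datasets as tfds

# Point this at the extracted dataset directory (path is an assumption).
builder = tfds.builder_from_directory("clic_edm_ttbar_pf/1.0.0")
ds = builder.as_data_source(split="train")

event = ds[0]  # array_record data sources support random access
print(event["X"].shape, event["ytarget"].shape, event["ycand"].shape)
```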
We designed two new samplers for monitoring airborne particulates, including fungal and fern spores and plant pollen, that rely on natural wind currents (Passive Environmental Sampler) or a battery operated fan (Active Environmental Sampler). Both samplers are modeled after commercial devices such as the Rotorod® and the Burkard® samplers, but are more economical and require less maintenance than commercial devices. We conducted wind tunnel comparisons of our two new samplers to Rotorod® samplers using synthetic polyethylene spheres (12 - 160 µm in diameter) to compare numbers and size range of particulates that are captured by the samplers. This dataset contains raw numbers of polyethylene spheres that were captured by the samplers during eight separate trials in a sealed room with constant recirculating air flow.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
Particle Detection is a dataset for object detection tasks - it contains Particle annotations for 3,319 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
SPID is a comprehensive dataset composed of synthetic particle image velocimetry (PIV) image pairs and their corresponding exact optical flow computations. It serves as a valuable resource for researchers and practitioners in the field. The dataset is organized into three subsets: training, validation, and test, distributed in a ratio of 70%, 15%, and 15%, respectively.

Each subset within SPID consists of an input denoted as "x", which comprises synthetic image pairs, and an output termed "y", which represents the exact optical flow calculated for each image pair. Notably, the images within the dataset are single-channel, and the optical flow is decomposed into its u and v components. The shape of the input subsets is (number of samples, number of frames, image width, image height, number of channels); the shape of the output subsets is (number of samples, velocity components, image width, image height).

The SPID dataset is a preprocessed version of the Raw Synthetic Particle Image Dataset (RSPID), ensuring improved usability and reliability. It is packaged as NumPy compressed NPZ files, which store the inputs and outputs separately with the labels train, validation and test as access keys. This format simplifies data extraction and integration into machine learning frameworks and libraries.

SPID incorporates various factors that impact PIV analysis to provide a comprehensive and realistic simulation. The dataset includes image pairs with an image width of 665 pixels and an image height of 630 pixels at 8-bit depth. It incorporates different particle radii (1, 2, 3, and 4 pixels) and particle densities (15, 17, 20, 23, 25, and 32 particles) to capture diverse particle configurations. To simulate real-world scenarios, SPID introduces displacement variations through the delta x factor, ranging from 0.05% to 0.25%. Noise levels (1, 5, 10, and 15) are also incorporated to mimic practical PIV measurements with varying degrees of noise. Furthermore, out-of-plane motion effects are considered with standard deviations of 0.01, 0.025, and 0.05 to assess their impact on optical flow accuracy.

The dataset covers a wide range of flow patterns encountered in fluid dynamics: Rankine uniform, Rankine vortex, parabolic, stagnation, shear, and decaying vortex flows, allowing for comprehensive testing and evaluation of PIV algorithms across different scenarios.
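A minimal sketch of reading the splits; the concrete NPZ file names are assumptions, while the train/validation/test access keys follow the description above.

```python
import numpy as np

x = np.load("spid_x.npz")  # inputs, file name assumed
y = np.load("spid_y.npz")  # outputs, file name assumed

x_train, y_train = x["train"], y["train"]
# (samples, frames, width, height, channels) / (samples, components, width, height)
print(x_train.shape, y_train.shape)
```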
License: U.S. Government Works, https://www.usa.gov/government-works
The SP-2 experiment on Vega spacecraft was intended for studying the spatial and mass distributions of dust particles in the cometary coma over the mass range 1.e-16 to 1.e-6 g. Covering such a broad mass range was made possible by using sensors of two types, namely, impact plasma and acoustic.
The dataset has been built from official ATLAS full-detector simulation, with "Higgs to tautau" events mixed with different backgrounds. The simulator has two parts. In the first, random proton-proton collisions are simulated based on the knowledge that we have accumulated on particle physics. It reproduces the random microscopic explosions resulting from the proton-proton collisions. In the second part, the resulting particles are tracked through a virtual model of the detector. The process yields simulated events with properties that mimic the statistical properties of the real events, with additional information on what has happened during the collision before the particles are measured in the detector.
The signal sample contains events in which Higgs bosons (with a fixed mass of 125 GeV) were produced. The background sample was generated by other known processes that can produce events with at least one electron or muon and a hadronic tau, mimicking the signal. For the sake of simplicity, only three background processes were retained for the Challenge. The first comes from the decay of the Z boson (with a mass of 91.2 GeV) into two taus. This decay produces events with a topology very similar to that produced by the decay of a Higgs. The second set contains events with a pair of top quarks, which can have a lepton and a hadronic tau among their decay products. The third set involves the decay of the W boson, where one electron or muon and a hadronic tau can appear simultaneously only through imperfections of the particle identification procedure.
Due to the complexity of the simulation process, each simulated event has a weight that is proportional to the conditional density divided by the instrumental density used by the simulator (an importance-sampling flavour), and normalised for integrated luminosity such that, in any region, the sum of the weights of events falling in the region is an unbiased estimate of the expected number of events falling in the same region during a given fixed time interval. In our case, the weights correspond to the quantity of real data taken during the year 2012. The weights are an artifact of the way the simulation works and so they are not part of the input to the classifier. For the Challenge, weights have been provided in the training set so the AMS can be properly evaluated. Weights were not provided in the qualifying set since the weight distribution of the signal and background sets are very different and so they would give away the label immediately. However, in the opendata.cern.ch dataset, weights and labels have been provided for the complete dataset.
The evaluation metric is the approximate median significance (AMS):
\[ \text{AMS} = \sqrt{2\left((s+b+b_r) \log \left(1 + \frac{s}{b + b_r}\right)-s\right)}\]
where $s$ and $b$ are the weighted numbers of selected signal and background events (defined below) and $b_r$ is a constant regularization term, set to $b_r = 10$ in the Challenge.
More precisely, let $(y_1, \ldots, y_n) \in \{\text{b},\text{s}\}^n$ be the vector of true test labels, let $(\hat{y}_1, \ldots, \hat{y}_n) \in \{\text{b},\text{s}\}^n$ be the vector of predicted (submitted) test labels, and let $(w_1, \ldots, w_n) \in {\mathbb{R}^+}^n$ be the vector of weights. Then
\[ s = \sum_{i=1}^n w_i\mathbb{1}\{y_i = \text{s}\} \mathbb{1}\{\hat{y}_i = \text{s}\} \]
and
\[ b = \sum_{i=1}^n w_i\mathbb{1}\{y_i = \text{b}\} \mathbb{1}\{\hat{y}_i = \text{s}\}, \]
where the indicator function $\mathbb{1}\{A\}$ is 1 if its argument $A$ is true and 0 otherwise.
For more information on the statistical model and the derivation of the metric, see the documentation.
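For concreteness, here is a direct numpy transcription of the metric under these definitions, with the regularization term set to 10 as in the Challenge.

```python
import numpy as np

def ams(y_true, y_pred, weights, b_r=10.0):
    """Approximate median significance of a selection.

    y_true, y_pred: 1 for signal ("s") and 0 for background ("b");
    weights: the per-event weights provided with the training set.
    """
    selected = (y_pred == 1)
    s = weights[selected & (y_true == 1)].sum()  # weighted true positives
    b = weights[selected & (y_true == 0)].sum()  # weighted false positives
    return np.sqrt(2.0 * ((s + b + b_r) * np.log1p(s / (b + b_r)) - s))
```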
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Derived from https://zenodo.org/record/8260741, prepared in a machine-learning friendly TFDS format, ready to be used with https://zenodo.org/record/8397954.
clic_edm_ttbar_hits_pf10k.tar: ee -> ttbar, center of mass energy at 380 GeV, 10k events
clic_edm_qq_hits_pf10k.tar: ee -> Z* -> qqbar, center of mass energy at 380 GeV, 10k events
Contents
Each .tar file contains the dataset in the tensorflow-datasets (minimum version v4.9.1), array_record format.
Dataset semantics
Each dataset consists of events that can be iterated over using the tensorflow-datasets library in either tensorflow or pytorch. Each event has the following information available:
X: the reconstruction input features, i.e. tracks and calorimeter hits
ygen: the ground truth particles with the features ["PDG", "charge", "pt", "eta", "sin_phi", "cos_phi", "energy", "jet_idx"], with "jet_idx" corresponding to the gen-jet assignment of this particle
ycand: the baseline Pandora PF particles with the features ["PDG", "charge", "pt", "eta", "sin_phi", "cos_phi", "energy", "jet_idx"], with "jet_idx" corresponding to the gen-jet assignment of this particle
The full semantics, including the list of features for X, are available at https://github.com/jpata/particleflow/blob/v1.6/mlpf/heptfds/clic_pf_edm4hep_hits/utils_edm.py.
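As with the datasets above, a sketch of iterating the events from pytorch-side code; the dataset path is illustrative, and random access via the tfds data source is assumed.

```python
import tensorflow_datasets as tfds

builder = tfds.builder_from_directory("clic_edm_ttbar_hits_pf/1.0.0")  # illustrative path
source = builder.as_data_source(split="train")

# The data source behaves like an indexable sequence, so it can be consumed
# directly from a training loop or wrapped in a torch.utils.data.DataLoader.
for i in range(len(source)):
    event = source[i]
    X, ygen, ycand = event["X"], event["ygen"], event["ycand"]
    break
```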
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The IMPTOX project has received funding from the EU's H2020 framework programme for research and innovation under grant agreement No. 965173. IMPTOX is part of the European MNP cluster on human health.
More information about the project here.
Description: This repository includes the trained weights and a custom COCO-formatted dataset used for developing and testing a Faster R-CNN R_50_FPN_3x object detector, specifically designed to identify particles in micro-FTIR filter images.
Contents:
Weights File (neuralNetWeights_V3.pth):
Format: .pth
Description: This file contains the trained weights for a Faster R-CNN model with a ResNet-50 backbone and a Feature Pyramid Network (FPN), trained with a 3x schedule. These weights are specifically tuned for detecting particles in micro-FTIR filter images.
Custom COCO Dataset (uFTIR_curated_square.v5-uftir_curated_square_2024-03-14.coco-segmentation.zip):
Format: .zip
Description: This zip archive contains a custom COCO-formatted dataset, including JPEG images and their corresponding annotation file. The dataset consists of images of micro-FTIR filters with annotated particles.
Contents:
Images: JPEG format images of micro-FTIR filters.
Annotations: A JSON file in COCO format providing detailed annotations of the particles in the images.
Management: The dataset can be managed and manipulated using the Pycocotools library, facilitating easy integration with existing COCO tools and workflows.
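For example, the annotations can be inspected with pycocotools once the archive is extracted; the JSON file name here is an assumption.

```python
from pycocotools.coco import COCO

coco = COCO("annotations.json")  # assumed name of the COCO annotation file
img_ids = coco.getImgIds()
first_anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))
print(len(img_ids), "images;", len(first_anns), "annotated particles in the first image")
```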
Applications: The provided weights and dataset are intended for researchers and practitioners in the field of microscopy and particle detection. The dataset and model can be used for further training, validation, and fine-tuning of object detection models in similar domains.
Usage Notes:
The neuralNetWeights_V3.pth file should be loaded into a PyTorch model compatible with the Faster R-CNN architecture, such as Detectron2.
The contents of uFTIR_curated_square.v5-uftir_curated_square_2024-03-14.coco-segmentation.zip should be extracted and can be used with any COCO-compatible object detection framework for training and evaluation purposes.
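A hedged sketch of the first usage note with Detectron2: the base config matches the stated Faster R-CNN R_50_FPN_3x architecture, but the class count and score threshold are assumptions.

```python
import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Base config matching the stated Faster R-CNN R_50_FPN_3x architecture.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "neuralNetWeights_V3.pth"
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1          # assumed: a single particle class
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # assumed threshold
cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

predictor = DefaultPredictor(cfg)  # predictor(bgr_image) returns detections
```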
Code can be found in the related GitHub repository.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Datasets generated using Key4HEP and the CLIC detector, suitable for particle flow reconstruction studies. The datasets contain generator particles, reconstructed tracks and calorimeter hits, reconstructed Pandora PF particles and their respective links in the EDM4HEP format.
The following processes have been simulated:
tt: ee -> ttbar at 380 GeV
qq: ee -> Z* -> qqbar at 380 GeV
e+/e-: single electron with momentum between 1 and 100 GeV
mu+/mu-: single muon with momentum between 1 and 100 GeV
kaon0L: single K0L with momentum between 1 and 100 GeV
neutron: single neutron with momentum between 1 and 100 GeV
pi+/pi-: single charged pion with momentum between 1 and 100 GeV
pi0: single neutral pion with momentum between 1 and 100 GeV
gamma: single photon with momentum between 1 and 100 GeV
The hard interaction has been generated with Pythia 8 (pythia.py, *.cmd), the detector simulation has been done with Geant4 (clic_steer.py), and the reconstruction with Marlin interfaced via Key4HEP (clicRec_e4h_input.py), which includes PF reconstruction with Pandora (PandoraSettings.zip). The main steering scripts for generating the simulations are run_sim.sh (qq and tt) and run_sim_gun.sh (particle gun), which also contain the exact versions of the software and the detector.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
Particle Detector is a dataset for object detection tasks - it contains Black Brown White annotations for 397 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
Particle Detector 2 is a dataset for object detection tasks - it contains Black Brown White 8R50 annotations for 396 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).