21 datasets found
  1. LISA Data Challenge Sangria (LDC2a)

    • data.niaid.nih.gov
    Updated Dec 3, 2022
    Cite
    Le Jeune, Maude; Babak, Stanislav (2022). LISA Data Challenge Sangria (LDC2a) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7132177
    Explore at:
    Dataset updated
    Dec 3, 2022
    Dataset provided by
    Université Paris Cité, CNRS, Astroparticule et Cosmologie
    Authors
    Le Jeune, Maude; Babak, Stanislav
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sangria includes two main datasets: each contains Gaussian instrumental noise and simulated waveforms from 30 million Galactic white dwarf binaries, from 17 verification Galactic binaries, and from merging massive black-hole binaries with parameters derived from an astrophysical model. The first dataset includes the full specification used to generate it: source parameters, a description of instrumental noise with the corresponding power spectral density, LISA's orbit, etc. We also release noiseless data for each type of source, for waveform validation purposes. The second dataset is blinded: the level of instrumental noise and the number of sources of each type are not disclosed (except for the known parameters of the verification binaries).
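    For a first look at the released files, a minimal Python inspection sketch; the filename is a placeholder for whichever Sangria HDF5 file you downloaded, and the "obs/tdi" group is an assumption based on the usual LDC layout (verify it with the printed listing if unsure):

    import h5py

    # "sangria_training.h5" is a placeholder name for the downloaded file.
    with h5py.File("sangria_training.h5", "r") as f:
        f.visititems(lambda name, obj: print(name, obj))  # discover the layout
        # Assumed LDC convention: TDI time series stored under "obs/tdi".
        tdi = f["obs/tdi"][()]
    print(tdi.dtype.names)  # structured array; typically ('t', 'X', 'Y', 'Z')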

    See LDC website for more details.

  2. LISA Data Challenge Spritz (LDC2b)

    • data.niaid.nih.gov
    Updated Jan 28, 2023
    Cite
    Le Jeune, Maude; Babak, Stanislav; Baghi, Quentin; Bayle, Jean-Baptiste; Castelli, Eleonora; Korsakova, Natalia (2023). LISA Data Challenge Spritz (LDC2b) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7436567
    Explore at:
    Dataset updated
    Jan 28, 2023
    Dataset provided by
    University of Glasgow
    Université Paris Cité, CNRS, Astroparticule et Cosmologie
    NASA Goddard Space Flight Center via University of Maryland, Baltimore County
    CEA Paris-Saclay University
    Authors
    Le Jeune, Maude; Babak, Stanislav; Baghi, Quentin; Bayle, Jean-Baptiste; Castelli, Eleonora; Korsakova, Natalia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The purpose of this challenge is to address, for the first time, realistic instrumental and environmental noise. We have two datasets with merging MBHBs. (i) A dataset with a loud (SNR ~2000) GW signal, lasting for about 31 days. The signal is expected to be detectable a few weeks before the merger and is therefore suitable for testing low-latency algorithms. We have added three short loud glitches distributed in the inspiral, late inspiral and near-merger parts of the signal. (ii) A dataset with a quiet (SNR ~100) GW signal lasting for one week, with a several-hour-long glitch placed near the merger. A third, 1-year-long dataset contains 36 verification binaries, with glitches placed according to a Poisson distribution at a rate of 4 glitches per day; the glitch model is described in the Spritz documentation.
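    As a worked illustration of that glitch rate, the sketch below draws glitch times from a homogeneous Poisson process at 4 events per day over one year; the rate comes from the description above, while the sampling recipe and variable names are our own, not LDC code:

    import numpy as np

    rng = np.random.default_rng(0)
    rate_per_day = 4.0        # glitch rate quoted in the Spritz description
    days = 365.0              # the verification-binary dataset spans one year
    n_glitches = rng.poisson(rate_per_day * days)  # total count in the window
    # Conditioned on the count, homogeneous Poisson event times are uniform.
    glitch_times = np.sort(rng.uniform(0.0, days, size=n_glitches))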

    See LDC website for more details.

  3. LISA Traffic Light Dataset

    • kaggle.com
    zip
    Updated Feb 28, 2018
    Cite
    Morten Bornø Jensen (2018). LISA Traffic Light Dataset [Dataset]. https://www.kaggle.com/mbornoe/lisa-traffic-light-dataset
    Explore at:
    zip (4520171347 bytes)
    Dataset updated
    Feb 28, 2018
    Authors
    Morten Bornø Jensen
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Context

    When evaluating computer vision projects, training and test data are essential. The data used is a representation of the challenge a proposed system shall solve. It is desirable to have a large database with large variation representing the challenge, e.g., detecting and recognizing traffic lights (TLs) in an urban environment. From surveying existing work it is clear that evaluation is currently limited primarily to small local datasets gathered by the authors themselves rather than a publicly available dataset. The local datasets are often small in size and contain little variation. This makes it nearly impossible to compare the work and results of different authors, and it also becomes hard to identify the current state of the field. In order to provide a common basis for future comparison of traffic light recognition (TLR) research, an extensive public database has been collected based on footage from US roads. The database consists of continuous test and training video sequences, totaling 43,007 frames and 113,888 annotated traffic lights. The sequences were captured by a stereo camera mounted on the roof of a vehicle driving during both night- and daytime, with varying light and weather conditions. Only the left camera view is used in this database, so the stereo feature is currently not used.

    Content

    The database was collected in San Diego, California, USA. It provides four day-time and two night-time sequences, primarily used for testing, comprising 23 minutes and 25 seconds of driving in Pacific Beach and La Jolla, San Diego. The stereo image pairs were acquired using Point Grey’s Bumblebee XB3 (BBX3-13S2C-60), which contains three lenses that capture images with a resolution of 1280 x 960, each with a Field of View (FoV) of 66°. The left camera view is used for all test sequences and training clips. The training clips consist of 13 daytime clips and 5 nighttime clips.

    Annotations

    The annotation.zip contains two types of annotations for each sequence and clip. The first annotation type contains information on the entire TL area and the state the TL is in. This annotation file is called frameAnnotationsBOX and is generated from the second annotation file by enlarging all annotations larger than 4x4. The second annotation type marks only the area of the traffic light which is lit, and the state it is in. This second annotation file is called frameAnnotationsBULB.

    The annotations are stored as one annotation per line, with additional information such as the class tag and the file path to the individual image file. The annotations are stored in a CSV file whose structure is shown in the listing below:

    Filename;Annotation tag;Upper left corner X;Upper left corner Y;Lower right corner X;Lower right corner Y;Origin file;Origin frame number;Origin track;Origin track frame number
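    A minimal Python sketch for reading such a file, assuming only the semicolon delimiter and the header shown above (the filename is a placeholder for one of the frameAnnotationsBOX/BULB files):

    import csv

    with open("frameAnnotationsBOX.csv", newline="") as f:
        reader = csv.DictReader(f, delimiter=";")
        for row in reader:
            # Each row is one annotated traffic light: class tag plus bounding box.
            box = (int(row["Upper left corner X"]), int(row["Upper left corner Y"]),
                   int(row["Lower right corner X"]), int(row["Lower right corner Y"]))
            print(row["Filename"], row["Annotation tag"], box)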

    Acknowledgements

    When using this dataset, we would appreciate it if you cite the following papers:

    Jensen MB, Philipsen MP, Møgelmose A, Moeslund TB, Trivedi MM. Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives. IEEE Transactions on Intelligent Transportation Systems. 2016 Feb 3;17(7):1800-1815. DOI: 10.1109/TITS.2015.2509509

    Philipsen, M. P., Jensen, M. B., Møgelmose, A., Moeslund, T. B., & Trivedi, M. M. (2015, September). Traffic light detection: A learning algorithm and evaluations on challenging dataset. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems (ITSC) (pp. 2341-2345). IEEE.

    Bibtex

    @article{jensen2016vision,
     title={Vision for looking at traffic lights: Issues, survey, and perspectives},
     author={Jensen, Morten Born{\o} and Philipsen, Mark Philip and M{\o}gelmose, Andreas and Moeslund, Thomas Baltzer and Trivedi, Mohan Manubhai},
     journal={IEEE Transactions on Intelligent Transportation Systems},
     volume={17},
     number={7},
     pages={1800--1815},
     year={2016},
     doi={10.1109/TITS.2015.2509509},
     publisher={IEEE}
    }
    
    @inproceedings{philipsen2015traffic,
     title={Traffic light detection: A learning algorithm and evaluations on challenging dataset},
     author={Philipsen, Mark Philip and Jensen, Morten Born{\o} and M{\o}gelmose, Andreas and Moeslund, Thomas B and Trivedi, Mohan M},
     booktitle={intelligent transportation systems (ITSC), 2015 IEEE 18th international conference on},
     pages={2341--2345},
     year={2015},
     organization={IEEE}
    }
    
  4. Erebor LDC2A Training Dataset Output Catalogs

    • zenodo.org
    zip
    Updated May 7, 2024
    Cite
    Michael L. Katz; Nikolaos Karnesis; Natalia Korsakova; Jonathan R. Gair; Nikolaos Stergioulas (2024). Erebor LDC2A Training Dataset Output Catalogs [Dataset]. http://doi.org/10.5281/zenodo.11130700
    Explore at:
    zip
    Dataset updated
    May 7, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Michael L. Katz; Nikolaos Karnesis; Natalia Korsakova; Jonathan R. Gair; Nikolaos Stergioulas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The catalogs output by our LISA global fit pipeline, Erebor. These datasets can be read in and analyzed with lisacattools. This catalog corresponds to the training dataset from the LISA Data Challenges 2A dataset (Sangria). The zip file download is ~2.9 GB; the unzipped data files amount to ~15 GB.

  5. IEEE 2014 Data Challenge Data

    • repository.lboro.ac.uk
    7z
    Updated Oct 9, 2019
    Cite
    Lei Mao; Lisa Jackson (2019). IEEE 2014 Data Challenge Data [Dataset]. http://doi.org/10.17028/rd.lboro.3518141.v1
    Explore at:
    7z
    Dataset updated
    Oct 9, 2019
    Dataset provided by
    Loughborough University
    Authors
    Lei Mao; Lisa Jackson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset from the IEEE 2014 Data Challenge; detailed information about the Data Challenge is included.

  6. Erebor LDC2A Hidden Dataset Output Catalogs

    • zenodo.org
    zip
    Updated Apr 20, 2024
    Cite
    Michael L. Katz; Nikolaos Karnesis; Natalia Korsakova; Jonathan R. Gair; Nikolaos Stergioulas (2024). Erebor LDC2A Hidden Dataset Output Catalogs [Dataset]. http://doi.org/10.5281/zenodo.11001147
    Explore at:
    zip
    Dataset updated
    Apr 20, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Michael L. Katz; Nikolaos Karnesis; Natalia Korsakova; Jonathan R. Gair; Nikolaos Stergioulas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The catalogs output by our LISA global fit pipeline, Erebor. These datasets can be read in and analyzed with lisacattools. This catalog corresponds to the hidden dataset from the LISA Data Challenges 2A dataset (Sangria). The zip file download is ~2.8 GB; the unzipped data files amount to ~14 GB.

  7. Varieties of Democracy (V-Dem) Data — v.15 (2025)

    • curate.nd.edu
    • datasetcatalog.nlm.nih.gov
    bin
    Updated May 12, 2025
    Cite
    Michael Coppedge; John Gerring; Carl Henrik Knutsen; Staffan I. Lindberg; Jan Teorell; David Altman; Fabio Angiolillo; Michael Bernhard; Agnes Cornell; M. Steven Fish; Linnea Fox; Lisa Gastaldi; Haakon Gjerløw; Adam Glynn; Ana Good God; Sandra Grahn; Allen Hicken; Katrin Kinzelbach; Joshua Krusell; Kyle L. Marquardt; Kelly McMann; Valeriya Mechkova; Juraj Medzihorsky; Natalia Natsika; Anja Neundorf; Pamela Paxton; Daniel Pemstein; Johannes von Römer; Brigitte Seim; Rachel Sigman; Svend-Erik Skaaning; Jeffrey Staton; Aksel Sundström; Marcus Tannenberg; Eitan Tzelgov; Yi-Ting Wang; Felix Wiebrecht; Tore Wig; Steven Lloyd Wilson; Daniel Ziblatt (2025). Varieties of Democracy (V-Dem) Data — v.15 (2025) [Dataset]. http://doi.org/10.7274/28719470
    Explore at:
    bin
    Dataset updated
    May 12, 2025
    Dataset provided by
    University of Notre Dame
    Authors
    Michael Coppedge; John Gerring; Carl Henrik Knutsen; Staffan I. Lindberg; Jan Teorell; David Altman; Fabio Angiolillo; Michael Bernhard; Agnes Cornell; M. Steven Fish; Linnea Fox; Lisa Gastaldi; Haakon Gjerløw; Adam Glynn; Ana Good God; Sandra Grahn; Allen Hicken; Katrin Kinzelbach; Joshua Krusell; Kyle L. Marquardt; Kelly McMann; Valeriya Mechkova; Juraj Medzihorsky; Natalia Natsika; Anja Neundorf; Pamela Paxton; Daniel Pemstein; Johannes von Römer; Brigitte Seim; Rachel Sigman; Svend-Erik Skaaning; Jeffrey Staton; Aksel Sundström; Marcus Tannenberg; Eitan Tzelgov; Yi-Ting Wang; Felix Wiebrecht; Tore Wig; Steven Lloyd Wilson; Daniel Ziblatt
    License

    https://www.law.cornell.edu/uscode/text/17/106

    Description

    Collected data sets from March 2025: Varieties of Democracy, version 15. Varieties of Democracy (V-Dem) seeks to capture seven different conceptions of democracy: participatory, consensual, majoritarian, deliberative, and egalitarian, in addition to the more familiar electoral and liberal democracy. Varieties of Democracy 15 produces the largest global dataset on democracy, with over 31 million data points for 202 countries from 1789 to 2024. Involving over 4,200 scholars and other country experts, V-Dem measures over 600 different attributes of democracy. The reliable, precise nature of the indicators, as well as their lengthy historical coverage, is useful to scholars studying why democracy succeeds or fails and how it affects human development, as well as to governments and NGOs wishing to evaluate efforts to promote democracy. V-Dem makes the improved indicators freely available for use by researchers, NGOs, international organizations, activists, and journalists. More information about V-Dem is available at v-dem.net, including visualization interfaces for data from 202 countries and the complete 2025 dataset for download.

    The V-Dem Collection contains coder-level data and uncertainty estimates for all of the Varieties of Democracy datasets.

  8. Draft Report: Research Data Management Support in the Humanities: Challenges...

    • hsscommons.ca
    • hsscommons.rs-dev.uvic.ca
    Updated Apr 11, 2024
    Cite
    Caroline Winter (2024). Draft Report: Research Data Management Support in the Humanities: Challenges and Recommendations [Dataset]. http://doi.org/10.25547/S68E-J844
    Explore at:
    Dataset updated
    Apr 11, 2024
    Dataset provided by
    Canadian HSS Commons
    Authors
    Caroline Winter
    Description

    The following draft report by Stefan Higgins, Lisa Goddard, and Shahira Khair outlines discussions and findings from Research Data Management for Digitally-Curious Humanists, an online event sponsored by the Social Sciences and Humanities Research Council (SSHRC) and held on June 14, 2021, as an event aligned with the Digital Humanities Summer Institute (DHSI) 2021 Online Edition.

  9. Explanation of variables and data sources.

    • plos.figshare.com
    xls
    Updated Jan 27, 2025
    + more versions
    Cite
    Yongyu Feng; Huimin Wang; Jing Wu; Yan Wang; Hui Shi; Jun Zhao (2025). Explanation of variables and data sources. [Dataset]. http://doi.org/10.1371/journal.pone.0317659.t001
    Explore at:
    xls
    Dataset updated
    Jan 27, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Yongyu Feng; Huimin Wang; Jing Wu; Yan Wang; Hui Shi; Jun Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The increasing population density and impervious surface area have exacerbated the urban heat island effect, posing significant challenges to urban environments and sustainable development. Urban spatial morphology is crucial in mitigating the urban heat island effect. This study investigated the impact of urban spatial morphology on land surface temperature (LST) at the township scale. We proposed a six-dimensional factor system to describe urban spatial morphology, comprising Atmospheric Quality, Remote Sensing Indicators, Terrain, Land Use/Land Cover, Building Scale, and Socioeconomic Factors. Spatial autocorrelation and spatial regression methods were used to analyze the impact, drawing on township-scale data for Linyi City from 2013 to 2022. The results showed that LST is significantly influenced by urban spatial morphology, with the strongest correlations found for land use types, landscape metrics, and remote sensing indices. The global Moran’s I value of LST exceeds 0.7, indicating a strong positive spatial correlation. The High-High LISA values are distributed in the central and western areas, and the Low-Low LISA values are found in the northern regions and some scattered counties. The Geographically Weighted Regression (GWR) model outperforms the Spatial Error Model (SEM) and Ordinary Least Squares (OLS) model, making it more suitable for exploring these relationships. The findings aim to provide valuable references for town planning, resource allocation, and sustainable development.
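    For readers who want to reproduce this style of analysis, a minimal global Moran's I computation with the PySAL stack might look as follows; the shapefile name and the LST column are hypothetical placeholders, not files shipped with this dataset:

    import geopandas as gpd
    from libpysal.weights import Queen
    from esda.moran import Moran

    gdf = gpd.read_file("townships.shp")  # hypothetical township polygons with an LST column
    w = Queen.from_dataframe(gdf)         # contiguity-based spatial weights
    w.transform = "r"                     # row-standardize the weights
    mi = Moran(gdf["LST"].values, w)      # global Moran's I for land surface temperature
    print(mi.I, mi.p_sim)                 # statistic and permutation-based p-value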

  10. Training Data for Gollum in Higgs Uncertainty Challenge

    • zenodo.org
    bin
    Updated May 2, 2025
    Cite
    Lisa Benato; Cristina Giordano; Claudius Krause; Ang Li; Robert Schoefbeck; Dennis Schwarz; Maryam Shooshtari; Daohan Wang (2025). Training Data for Gollum in Higgs Uncertainty Challenge [Dataset]. http://doi.org/10.5281/zenodo.15322773
    Explore at:
    bin
    Dataset updated
    May 2, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Lisa Benato; Cristina Giordano; Claudius Krause; Ang Li; Robert Schoefbeck; Dennis Schwarz; Maryam Shooshtari; Daohan Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a subset of the training data for "Gollum", Team HEPHY's submission to the FAIR Universe Higgs Uncertainty Challenge. For the full training, we used systematic variations of up to 3 standard deviations. All of these can be generated from the nominal file and the code, both provided by the challenge.

    Data was taken from Codabench: https://www.codabench.org/datasets/download/b9e59d0a-4db3-4da4-b1f8-3f609d1835b2/, and systematic variations were applied according to the description in https://arxiv.org/abs/2410.02867.

    Gollum Code: https://github.com/HephyAnalysisSW/GOLLUM

    Gollum Publication: TBA

    FAIR Universe Challenge: https://www.codabench.org/competitions/2977, https://github.com/FAIR-Universe/HEP-Challenge/tree/master/

    The files are:

    • nominal.h5 is a dataset without systematic variations
    • met, jes, or tes in the file name indicates a variation of MET (missing energy), jet energy scale, or tau energy scale to the value that follows. For example, the file tes_0p99_jes_0p99.h5 includes events where the tau and jet energy scales were both multiplied by a factor of 0.99, corresponding to nuisance parameters with a value of -1 (a quick way to inspect these files is sketched after this list).
    • Normalization-type uncertainties are not included, as these samples can be obtained by changing the corresponding event weights.
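    Since the internal layout of these HDF5 files is defined by the challenge rather than documented here, a generic listing with h5py is a safe first step (the filename is one of the variation files named above):

    import h5py

    # Walk the HDF5 hierarchy and print every group/dataset the file contains.
    with h5py.File("tes_0p99_jes_0p99.h5", "r") as f:
        f.visititems(lambda name, obj: print(name, obj))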

    The file toy_mu_2.h5 is a toy dataset with the true value of the signal strength mu set to 2. The detailed values of the nuisance parameters (in the format (1+alpha); see the documentation of the Challenge) are stored in a dictionary that can be found in toy_mu_2.pkl:


    import pickle

    # Load the dictionary of true nuisance-parameter values for the toy dataset.
    with open('toy_mu_2.pkl', 'rb') as myfile:
        nuisances = pickle.load(myfile)
    print(nuisances)

    >>> {'mu': 2.0, 'tes': 0.9945927027846702, 'jes': 1.0094985245559578, 'soft_met': 0.16120480304203255, 'ttbar_scale': 0.9866736788084463, 'diboson_scale': 1.0610956857813547, 'bkg_scale': 0.999100305445466}


  11. Data from: Predicting amphibian intraspecific diversity with machine...

    • datadryad.org
    • data.niaid.nih.gov
    • +1 more
    zip
    Updated Nov 12, 2020
    Cite
    Lisa Barrow (2020). Predicting amphibian intraspecific diversity with machine learning: Challenges and prospects for integrating traits, geography, and genetic data [Dataset]. http://doi.org/10.5061/dryad.0cfxpnvzh
    Explore at:
    zip
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    Dryad
    Authors
    Lisa Barrow
    Time period covered
    Nov 11, 2020
    Description

    Data were compiled from open-access databases and were processed using a series of Python and R scripts (included with the Dryad package).

  12. Advanced inertial sensors for fundamental physics and gravitational wave...

    • data.nasa.gov
    application/rdfxml +5
    Updated Sep 7, 2018
    Cite
    (2018). Advanced inertial sensors for fundamental physics and gravitational wave astrophysics [Dataset]. https://data.nasa.gov/dataset/Advanced-inertial-sensors-for-fundamental-physics-/3fih-iu77
    Explore at:
    tsv, csv, xml, application/rdfxml, application/rssxml, json
    Dataset updated
    Sep 7, 2018
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Gravitational wave detection is one of the most compelling areas of observational astrophysics today. It represents an entirely new way of observing our universe and therefore provides enormous potential for scientific discovery. Low frequency gravitational waves in the 0.1 mHz to 1 Hz band, which can only be observed from space, provide the richest science and complement high frequency observatories on the ground such as LIGO. A space-based observatory will improve our understanding of the formation and growth of massive black holes, create a census of compact binary systems in the Milky Way, test general relativity in extreme conditions, and enable searches for new physics. All space-based gravitational wave observatories, like the Laser Interferometer Space Antenna (LISA), require free falling masses where all classical external forces are reduced below the tidal forces of the oscillating spacetime metric. No other concept has been found that would allow gravitational waves to be measured in the LISA frequency band.

    LISA Pathfinder (LPF), an ESA technology mission, will test key technologies for LISA and is scheduled for launch in July of 2015. However, the design of the LPF inertial sensor was solidified a decade ago. Since that time, new component technologies have been developed that can improve the sensor's acceleration noise performance and/or reduce complexity and technological risk. They are (a) alternate test mass and electrode housing coatings that can greatly simplify the caging system, (b) a new, lower cost and higher efficiency charge control system utilizing semiconductor UV emitters instead of Hg vapor lamps, and (c) operational modes of the inertial sensor that can mitigate the effects of higher than expected noise in the driving electronics and potential on-orbit thruster failures. These technologies involve three components of the LPF inertial sensor that represented the biggest technical challenges for the mission and were responsible, in part, for the delayed launch date.

    Evolved LISA, or eLISA, the European-led gravitational wave observatory, was recently selected as ESA's L3 large mission, with a launch in the 2030s. ESA will allow a 20% NASA contribution to eLISA and NASA has expressed strong interest in participating. Some of the technologies proposed here have already been identified as potential NASA contributions to eLISA by the European eLISA Consortium and by NASA itself. On the other hand, the imminent direct detection of gravitational waves by Advanced LIGO and by Pulsar Timing Arrays will ignite the era of gravitational wave observation. The distant launch date of eLISA, following two other large missions, L1 and L2, which could further delay L3, means that LISA has to be seen as one of the favorites for the next decadal survey. If this is the case, then NASA will not want to completely depend on a single, foreign vendor for the mission-critical inertial sensor, and instead will want to develop U.S. expertise and a U.S. vendor base for this technology to reduce programmatic risks.

    This Concept Study provides an excellent opportunity for NASA to examine how new ideas and technologies can be integrated into an advanced inertial sensor for gravitational wave astrophysics. During the Concept Study a modified inertial sensor incorporating these technologies will be designed and undergo initial proof-of-concept testing in an existing torsion pendulum facility at the University of Florida. The subsequent Development Effort will first optimize the sensor design, taking advantage of lessons learned during the laboratory testing and possibly from the on-orbit performance of LISA Pathfinder. Then a prototype instrument will be fabricated and prepared for more rigorous testing on ground and on sub-orbital flights and/or on the International Space Station.

  13. Supplementary data to this article can be found in supplementary material.

    • plos.figshare.com
    zip
    Updated Jan 27, 2025
    + more versions
    Cite
    Yongyu Feng; Huimin Wang; Jing Wu; Yan Wang; Hui Shi; Jun Zhao (2025). Supplementary data to this article can be found in supplementary material. [Dataset]. http://doi.org/10.1371/journal.pone.0317659.s001
    Explore at:
    zip
    Dataset updated
    Jan 27, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Yongyu Feng; Huimin Wang; Jing Wu; Yan Wang; Hui Shi; Jun Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary data to this article can be found in supplementary material.

  14. Data from: 3D Scholarly Digital Editions: Requirements And Challenges For...

    • dataverse.nl
    pdf
    Updated Dec 17, 2019
    Cite
    Angel David Nieves; Susan Schreibman; Costas Papadopoulos; Lisa Snyder (2019). 3D Scholarly Digital Editions: Requirements And Challenges For New Publication Models [Dataset]. http://doi.org/10.34894/ZH5SBH
    Explore at:
    pdf (176850 bytes)
    Dataset updated
    Dec 17, 2019
    Dataset provided by
    DataverseNL
    Authors
    Angel David Nieves; Susan Schreibman; Costas Papadopoulos; Lisa Snyder
    License

    https://dataverse.nl/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.34894/ZH5SBH

    Description

    Abstract of paper 1003 presented at the Digital Humanities Conference 2019 (DH2019), Utrecht, the Netherlands, 9-12 July 2019.

  15. DataSheet1_Development of the United States Environmental Protection...

    • frontiersin.figshare.com
    pdf
    Updated Jun 3, 2023
    Cite
    Lisa Baxter; Jeremy Baynes; Anne Weaver; Anne Neale; Timothy Wade; Megan Mehaffey; Danelle Lobdell; Kelly Widener; Wayne Cascio (2023). DataSheet1_Development of the United States Environmental Protection Agency’s Facilities Status Dashboard for the COVID-19 Pandemic: Approach and Challenges.PDF [Dataset]. http://doi.org/10.3389/ijph.2022.1604761.s001
    Explore at:
    pdf
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Lisa Baxter; Jeremy Baynes; Anne Weaver; Anne Neale; Timothy Wade; Megan Mehaffey; Danelle Lobdell; Kelly Widener; Wayne Cascio
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Objectives: Develop a tool for applying various COVID-19 re-opening guidelines to the more than 120 U.S. Environmental Protection Agency (EPA) facilities.

    Methods: A geographic information system boundary was created for each EPA facility encompassing the county where the EPA facility is located and the counties where employees commuted from. This commuting area is used for display in the Dashboard and to summarize population and COVID-19 health data for analysis.

    Results: Scientists in EPA’s Office of Research and Development developed the EPA Facility Status Dashboard, an easy-to-use web application that displays data and statistical analyses on COVID-19 cases, testing, hospitalizations, and vaccination rates.

    Conclusion: The Dashboard was designed to provide readily accessible information for EPA management and staff to view and understand the COVID-19 risk surrounding each facility. It has been modified several times based on user feedback, availability of new data sources, and updated guidance. The views expressed in this article are those of the authors and do not necessarily represent the views or the policies of the U.S. Environmental Protection Agency.

  16. Data from: FISBe: A real-world benchmark dataset for instance segmentation...

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    • +1 more
    Updated Apr 2, 2024
    Cite
    Mais, Lisa; Hirsch, Peter; Managan, Claire; Kandarpa, Ramya; Rumberger, Josef Lorenz; Reinke, Annika; Maier-Hein, Lena; Ihrke, Gudrun; Kainmueller, Dagmar (2024). FISBe: A real-world benchmark dataset for instance segmentation of long-range thin filamentous structures [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10875062
    Explore at:
    Dataset updated
    Apr 2, 2024
    Dataset provided by
    German Cancer Research Center
    Howard Hughes Medical Institute - Janelia Research Campus
    Max Delbrück Center
    Max Delbrück Center for Molecular Medicine
    Authors
    Mais, Lisa; Hirsch, Peter; Managan, Claire; Kandarpa, Ramya; Rumberger, Josef Lorenz; Reinke, Annika; Maier-Hein, Lena; Ihrke, Gudrun; Kainmueller, Dagmar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    General

    For more details and the most up-to-date information please consult our project page: https://kainmueller-lab.github.io/fisbe.

    Summary

    A new dataset for neuron instance segmentation in 3D multicolor light microscopy data of fruit fly brains:

    • 30 completely labeled (segmented) images

    • 71 partly labeled images

    • altogether comprising ~600 expert-labeled neuron instances (labeling a single neuron takes 30-60 min on average, yet a difficult one can take up to 4 hours)

    • to the best of our knowledge, the first real-world benchmark dataset for instance segmentation of long thin filamentous objects

    • a set of metrics and a novel ranking score for meaningful method benchmarking

    • an evaluation of three baseline methods in terms of the above metrics and score

    Abstract

    Instance segmentation of neurons in volumetric light microscopy images of nervous systems enables groundbreaking research in neuroscience by facilitating joint functional and morphological analyses of neural circuits at cellular resolution. Yet said multi-neuron light microscopy data exhibits extremely challenging properties for the task of instance segmentation: Individual neurons have long-ranging, thin filamentous and widely branching morphologies, multiple neurons are tightly inter-weaved, and partial volume effects, uneven illumination and noise inherent to light microscopy severely impede local disentangling as well as long-range tracing of individual neurons. These properties reflect a current key challenge in machine learning research, namely to effectively capture long-range dependencies in the data. While respective methodological research is buzzing, to date methods are typically benchmarked on synthetic datasets. To address this gap, we release the FlyLight Instance Segmentation Benchmark (FISBe) dataset, the first publicly available multi-neuron light microscopy dataset with pixel-wise annotations. In addition, we define a set of instance segmentation metrics for benchmarking that we designed to be meaningful with regard to downstream analyses. Lastly, we provide three baselines to kick off a competition that we envision to both advance the field of machine learning regarding methodology for capturing long-range data dependencies, and facilitate scientific discovery in basic neuroscience.

    Dataset documentation:

    We provide a detailed documentation of our dataset, following the Datasheet for Datasets questionnaire:

    FISBe Datasheet

    Our dataset originates from the FlyLight project, where the authors released a large image collection of nervous systems of ~74,000 flies, available for download under CC BY 4.0 license.

    Files

    fisbe_v1.0_{completely,partly}.zip

    contains the image and ground truth segmentation data; there is one zarr file per sample, see below for more information on how to access zarr files.

    fisbe_v1.0_mips.zip

    maximum intensity projections of all samples, for convenience.

    sample_list_per_split.txt

    a simple list of all samples and the subset they are in, for convenience.

    view_data.py

    a simple python script to visualize samples, see below for more information on how to use it.

    dim_neurons_val_and_test_sets.json

    a list of instance ids per sample that are considered to be of low intensity/dim; can be used for extended evaluation.

    Readme.md

    general information

    How to work with the image files

    Each sample consists of a single 3D MCFO image of neurons of the fruit fly. For each image, we provide a pixel-wise instance segmentation for all separable neurons. Each sample is stored as a separate zarr file (zarr is a file storage format for chunked, compressed, N-dimensional arrays based on an open-source specification). The image data ("raw") and the segmentation ("gt_instances") are stored as two arrays within a single zarr file. The segmentation mask for each neuron is stored in a separate channel. The order of dimensions is CZYX.

    We recommend working in a virtual environment, e.g., using conda:

    conda create -y -n flylight-env -c conda-forge python=3.9
    conda activate flylight-env

    How to open zarr files

    Install the Python zarr package:

    pip install zarr

    Open a zarr file with:

    import zarr

    # "<sample>.zarr" stands for the path to one of the per-sample zarr files.
    raw = zarr.open("<sample>.zarr", mode='r', path="volumes/raw")
    seg = zarr.open("<sample>.zarr", mode='r', path="volumes/gt_instances")

    Optionally, convert to numpy:

    import numpy as np
    raw_np = np.array(raw)

    Zarr arrays are read lazily on-demand. Many functions that expect numpy arrays also work with zarr arrays. Optionally, the arrays can also be converted to numpy arrays explicitly.

    How to view zarr image files

    We recommend using napari to view the image data.

    Install napari:

    pip install "napari[all]"

    Save the following Python script:

    import zarr, sys, napari

    raw = zarr.load(sys.argv[1], mode='r', path="volumes/raw")
    gts = zarr.load(sys.argv[1], mode='r', path="volumes/gt_instances")

    viewer = napari.Viewer(ndisplay=3)
    for idx, gt in enumerate(gts):
        viewer.add_labels(gt, rendering='translucent', blending='additive', name=f'gt_{idx}')
    viewer.add_image(raw[0], colormap="red", name='raw_r', blending='additive')
    viewer.add_image(raw[1], colormap="green", name='raw_g', blending='additive')
    viewer.add_image(raw[2], colormap="blue", name='raw_b', blending='additive')
    napari.run()

    Execute:

    python view_data.py /R9F03-20181030_62_B5.zarr

    Metrics

    S: Average of avF1 and C

    avF1: Average F1 Score

    C: Average ground truth coverage

    clDice_TP: Average true positives clDice

    FS: Number of false splits

    FM: Number of false merges

    tp: Relative number of true positives

    For more information on our selected metrics and formal definitions please see our paper.
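    As defined above, the ranking score S is simply the mean of avF1 and C; in code:

    # S combines detection quality (avF1) and ground-truth coverage (C).
    def ranking_score(av_f1: float, coverage: float) -> float:
        return 0.5 * (av_f1 + coverage)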

    Baseline

    To showcase the FISBe dataset together with our selection of metrics, we provide evaluation results for three baseline methods, namely PatchPerPix (ppp), Flood Filling Networks (FFN) and a non-learnt application-specific color clustering from Duan et al. For detailed information on the methods and the quantitative results please see our paper.

    License

    The FlyLight Instance Segmentation Benchmark (FISBe) dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

    Citation

    If you use FISBe in your research, please use the following BibTeX entry:

    @misc{mais2024fisbe,
     title={FISBe: A real-world benchmark dataset for instance segmentation of long-range thin filamentous structures},
     author={Lisa Mais and Peter Hirsch and Claire Managan and Ramya Kandarpa and Josef Lorenz Rumberger and Annika Reinke and Lena Maier-Hein and Gudrun Ihrke and Dagmar Kainmueller},
     year={2024},
     eprint={2404.00130},
     archivePrefix={arXiv},
     primaryClass={cs.CV}
    }

    Acknowledgments

    We thank Aljoscha Nern for providing unpublished MCFO images as well as Geoffrey W. Meissner and the entire FlyLight Project Team for valuable discussions. P.H., L.M. and D.K. were supported by the HHMI Janelia Visiting Scientist Program. This work was co-funded by Helmholtz Imaging.

    Changelog

    There have been no changes to the dataset so far. All future changes will be listed on the changelog page.

    Contributing

    If you would like to contribute, have encountered any issues or have any suggestions, please open an issue for the FISBe dataset in the accompanying GitHub repository.

    All contributions are welcome!

  17. Data from: Domain-specific neural networks improve automated bird sound...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Sep 28, 2022
    Cite
    Patrik Lauha; Panu Somervuo; Petteri Lehikoinen; Lisa Geres; Tobias Richter; Sebastian Seibold; Otso Ovaskainen (2022). Domain-specific neural networks improve automated bird sound recognition already with small amount of local data [Dataset]. http://doi.org/10.5061/dryad.2bvq83btd
    Explore at:
    zip
    Dataset updated
    Sep 28, 2022
    Dataset provided by
    University of Jyväskylä
    University of Helsinki
    Technical University of Munich
    Goethe University Frankfurt
    Authors
    Patrik Lauha; Panu Somervuo; Petteri Lehikoinen; Lisa Geres; Tobias Richter; Sebastian Seibold; Otso Ovaskainen
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    An automatic bird sound recognition system is a useful tool for collecting data on different bird species for ecological analysis. Together with autonomous recording units (ARUs), such a system makes it possible to collect bird observations on a scale that no human observer could ever match. During the last decades progress has been made in the field of automatic bird sound recognition, but recognizing bird species from untargeted soundscape recordings remains a challenge. In this article we demonstrate the workflow for building a global identification model and adjusting it to perform well on the data of autonomous recorders from a specific region. We show how data augmentation and a combination of global and local data can be used to train a convolutional neural network to classify vocalizations of 101 bird species. We construct a model and train it with a global data set to obtain a base model. The base model is then fine-tuned with local data from Southern Finland in order to adapt it to the sound environment of a specific location, and tested with two data sets: one originating from the same Southern Finnish region and another originating from a different region in the German Alps. Our results suggest that fine-tuning with local data significantly improves the network performance. Classification accuracy was improved for test recordings from the same area as the local training data (Southern Finland) but not for recordings from a different region (German Alps). Data augmentation enables training with a limited number of training data, and even with few local data samples significant improvement over the base model can be achieved. Our model outperforms the current state-of-the-art tool for automatic bird sound classification. Using local data to adjust the recognition model for the target domain leads to improvement over general non-tailored solutions. The process introduced in this article can be applied to build a fine-tuned bird sound classification model for a specific environment.

    Methods

    This repository contains the data and recognition models described in the paper Domain-specific neural networks improve automated bird sound recognition already with small amount of local data (Lauha et al., 2022).
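    As an illustration of the fine-tuning step described above, a PyTorch-style transfer-learning sketch is shown below; the backbone, layer names, and freezing strategy are illustrative assumptions, not the authors' actual network:

    import torch.nn as nn
    import torchvision

    # A pretrained backbone stands in for the "global" base model.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    # Replace the classification head for the 101 bird species in the paper.
    model.fc = nn.Linear(model.fc.in_features, 101)
    # Freeze the backbone and fine-tune only the head on local spectrograms.
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith("fc")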

  18. Three Thousand Dishes on a Georgian Table, 1788-1813 - Dataset

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jul 11, 2024
    Cite
    Adam Crymble; Sarah Fox; Rachel Rich; Lisa Smith (2024). Three Thousand Dishes on a Georgian Table, 1788-1813 - Dataset [Dataset]. http://doi.org/10.5281/zenodo.8070132
    Explore at:
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Adam Crymble; Sarah Fox; Rachel Rich; Lisa Smith
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset makes accessible the uniquely comprehensive dining records of King George III, Queen Charlotte, the Prince Regent, and their households during the declining years of the King (1788-1813). It includes a transcription and structuring of two volumes that outline day-by-day what members of the royal households were fed at each meal. It includes details of more than 40,000 plates of food from over 3,000 distinct recipes, recorded by the kitchen staff as part of their financial accountability, but offering historians a rich glimpse into the culture and consumption of two royal palaces, as well as the rich multicultural influences on eighteenth century cooking.

    The dataset includes a spreadsheet of all dishes, and an accompanying appendix that outlines our methodology and some suggestions for using the material effectively.

    Additional context and information about the records is available in: Adam Crymble; Sarah Fox; Rachel Rich; Lisa Smith, ‘Three Thousand Dishes on a Georgian Table: The Data of Royal Eating in England, 1788-1813’, Food & History, vol 21, no 2 (2023).

    License

    We release the following documents under a Creative Commons CC BY 4.0 license:

    ● Appendices

    ● Three Thousand Dishes on a Georgian Table - Dataset, Version 1.0 (.xlsx file)

    © Images of the 'Kew Ledger' reproduced by permission of The National Archives, London, England. These images are NOT released under a CC license. Until 2027, copies of the 'Kew Ledger' images can be found at https://www.migrants.adamcrymble.org/menus/, after which our license to share them will expire. The National Archives give no warranty as to the accuracy, completeness or fitness for the purpose of the information provided. Images may be used only for purposes of research, private study or education. Applications for any other use should be made to The National Archives Image Library, Kew, Richmond, Surrey TW9 4DU, Tel: 020 8392 5225 Fax: 020 8392 5266.

    Images of the menu book of the Prince Regent are available via the Royal Archives 'Georgian Papers' website: https://gpp.rct.uk/Record.aspx?src=CalmView.Catalog&id=GEO_MENUS%2f1.

    Citation

    Anyone publishing academically or commercially based on research conducted with this dataset in whole or in part is asked to credit the authors with the following citation:

    • Adam Crymble; Sarah Fox; Rachel Rich; Lisa Smith, ‘Three Thousand Dishes on a Georgian Table: The Data of Royal Eating in England, 1788-1813’, Food & History, vol 21, no. 2 (2023).

    This dataset is provided as-is. Anyone conducting research or making use of the dataset for any purpose whatsoever is encouraged to use their own judgment when deciding whether or not to accept the interpretations of the authors.

    Data creation occurred between 2015 and 2023.

    These data were compiled with the financial support of the British Academy "Tackling the UK’s International Challenges Programme 2019" (IC4/100235) and of the School of Cultural Studies and Humanities, Leeds Beckett University.

    The original materials are held as the ‘Kew Ledger’ at the National Archives in the United Kingdom:

    • ‘Kew Ledger’ The National Archives (TNA), LS/9/226.

    The original materials are held as the ‘Menu Book for the Prince Regent and his Household, principally relating to Carlton House, 1812-1813’ at the Royal Archives in the United Kingdom:

    • ‘Menu book for the Prince Regent and his Household, principally relating to Carlton House’ The Royal Archives, MRH/MRHF/MENUS/MAIN/MIXED/1.

  19. Study data collected, timing of data collection points, and purpose of data...

    • plos.figshare.com
    xls
    Updated Sep 13, 2024
    Cite
    Den-Ching A. Lee; Michele Callisaya; Claudia Meyer; Morag E. Taylor; Katherine Lawler; Pazit Levinger; Susan Hunter; Dawn Mackey; Elissa Burton; Natasha Brusco; Terry P. Haines; Christina Ekegren; Amelia Crabtree; Lisa Licciardi; Keith D. Hill (2024). Study data collected, timing of data collection points, and purpose of data item. [Dataset]. http://doi.org/10.1371/journal.pone.0307018.t001
    Explore at:
    xls
    Dataset updated
    Sep 13, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Den-Ching A. Lee; Michele Callisaya; Claudia Meyer; Morag E. Taylor; Katherine Lawler; Pazit Levinger; Susan Hunter; Dawn Mackey; Elissa Burton; Natasha Brusco; Terry P. Haines; Christina Ekegren; Amelia Crabtree; Lisa Licciardi; Keith D. Hill
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Study data collected, timing of data collection points, and purpose of data item.

  20. Cyberinfrastructure Research Data Management (RDM) Conference 2025...

    • figshare.com
    pdf
    Updated Aug 22, 2025
    Cite
    Halle Gray; Denise Davis; Tabitha Samuel; Wesley Brashear; Rachel Fleming; slivey@ncsu.edu; Mary Ellen Sloane; Richard Gerber; Anas AlSobeh; Mickey Slimp; Fidelis Ngang; Surada Suwansathit; Sarah Janes; Gary Rogers; Sheila Rabun; Boyd Wilson; Stacie Powell; Soma Mukherjee; Stephen Miller; Pearl Go; Lizely Madrigal; Dhruva Chakravorty; Lisa Perez (2025). Cyberinfrastructure Research Data Management (RDM) Conference 2025 Presentations [Dataset]. http://doi.org/10.6084/m9.figshare.29963615.v1
    Explore at:
    pdf
    Dataset updated
    Aug 22, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Halle Gray; Denise Davis; Tabitha Samuel; Wesley Brashear; Rachel Fleming; slivey@ncsu.edu; Mary Ellen Sloane; Richard Gerber; Anas AlSobeh; Mickey Slimp; Fidelis Ngang; Surada Suwansathit; Sarah Janes; Gary Rogers; Sheila Rabun; Boyd Wilson; Stacie Powell; Soma Mukherjee; Stephen Miller; Pearl Go; Lizely Madrigal; Dhruva Chakravorty; Lisa Perez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Presentations from the Cyberinfrastructure Research Data Management (RDM) Conference 2025. These presentations highlight how Research Data Management practices complement the use of advanced cyberinfrastructure technologies in research and academic programs. We hope to define the scope of the effort on two major fronts: (i) how institutions are governing RDM policies; and (ii) mechanisms for integrating RDM into research and curricular ecosystems. Challenges, opportunities, and strategies are discussed through use cases.
