66 datasets found
  1. Description of data in test_set.csv.

    • plos.figshare.com
    xls
    Updated Apr 1, 2024
    + more versions
    Cite
    Michael Joannou; Pia Rotshtein; Uta Noppeney (2024). Description of data in test_set.csv. [Dataset]. http://doi.org/10.1371/journal.pone.0301098.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Apr 1, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Michael Joannou; Pia Rotshtein; Uta Noppeney
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present Audiovisual Moments in Time (AVMIT), a large-scale dataset of audiovisual action events. In an extensive annotation task, 11 participants labelled a subset of 3-second audiovisual videos from the Moments in Time dataset (MIT). For each trial, participants assessed whether the labelled audiovisual action event was present and whether it was the most prominent feature of the video. The dataset includes annotations for 57,177 audiovisual videos, each independently evaluated by 3 of 11 trained participants. From this initial collection, we created a curated test set of 16 distinct action classes, with 60 videos each (960 videos). We also offer 2 sets of pre-computed audiovisual feature embeddings, using VGGish/YamNet for audio data and VGG16/EfficientNetB0 for visual data, thereby lowering the barrier to entry for audiovisual DNN research. We explored the advantages of AVMIT annotations and feature embeddings for improving performance on audiovisual event recognition. A series of 6 Recurrent Neural Networks (RNNs) were trained on either AVMIT-filtered audiovisual events or modality-agnostic events from MIT, and then tested on our audiovisual test set. In all RNNs, top-1 accuracy was increased by 2.71-5.94% by training exclusively on audiovisual events, even outweighing a three-fold increase in training data. Additionally, we introduce the Supervised Audiovisual Correspondence (SAVC) task, whereby a classifier must discern whether audio and visual streams correspond to the same action label. We trained 6 RNNs on the SAVC task, with or without AVMIT-filtering, to explore whether AVMIT is helpful for cross-modal learning. In all RNNs, accuracy improved by 2.09-19.16% with AVMIT-filtered data. We anticipate that the newly annotated AVMIT dataset will serve as a valuable resource for research and comparative experiments involving computational models and human participants, specifically when addressing research questions where audiovisual correspondence is of critical importance.
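
    The description mentions pre-computed audio/visual embeddings fed to RNN classifiers. Below is a minimal sketch of that setup, not the authors' code: embedding dimensions (128 for VGGish-style audio features, 1280 for EfficientNetB0-style visual features), the number of timesteps, and all names are assumptions for illustration only.

    ```python
    # Minimal sketch (not the authors' implementation): a GRU classifier over
    # concatenated pre-computed audiovisual embeddings, roughly matching the
    # setup described above. Dimensions and timestep count are assumptions.
    import torch
    import torch.nn as nn

    class AVEventRNN(nn.Module):
        def __init__(self, audio_dim=128, visual_dim=1280, hidden=256, n_classes=16):
            super().__init__()
            self.rnn = nn.GRU(audio_dim + visual_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, audio_emb, visual_emb):
            # audio_emb: (batch, T, audio_dim), e.g. VGGish-style features
            # visual_emb: (batch, T, visual_dim), e.g. EfficientNetB0-style features
            x = torch.cat([audio_emb, visual_emb], dim=-1)
            _, h = self.rnn(x)                 # final hidden state summarizes the clip
            return self.head(h.squeeze(0))     # logits over the 16 action classes

    model = AVEventRNN()
    logits = model(torch.randn(4, 3, 128), torch.randn(4, 3, 1280))  # dummy batch
    ```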

  2. moments-in-time

    • huggingface.co
    Updated Jul 17, 2024
    Cite
    Santiago Castro (2024). moments-in-time [Dataset]. https://huggingface.co/datasets/bryant1410/moments-in-time
    Explore at:
    Dataset updated
    Jul 17, 2024
    Authors
    Santiago Castro
    Description

    bryant1410/moments-in-time dataset hosted on Hugging Face and contributed by the HF Datasets community
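
    The repository named in the citation can typically be loaded with the Hugging Face `datasets` library; the available configs and splits are not documented here, so inspect the returned object.

    ```python
    # Minimal sketch: loading the Hub repository cited above with the `datasets`
    # library. Split/config names are not documented here; print the object to
    # see what is actually available.
    from datasets import load_dataset

    ds = load_dataset("bryant1410/moments-in-time")
    print(ds)  # shows the available splits and features
    ```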

  3. Transitions at Different Moments in Time: A Spatial Probit Approach...

    • journaldata.zbw.eu
    Updated Nov 22, 2022
    Cite
    J. Paul Elhorst; Pim Heijnen; Anna Samarina; Jan Jacobs (2022). Transitions at Different Moments in Time: A Spatial Probit Approach (replication data) [Dataset]. https://journaldata.zbw.eu/dataset/transitions-at-different-moments-in-time-a-spatial-probit-approach?activity_id=409ee6bb-4acc-464b-b4f5-443ba5485940
    Explore at:
    Dataset updated
    Nov 22, 2022
    Dataset provided by
    ZBW - Leibniz Informationszentrum Wirtschaft
    Authors
    J. Paul Elhorst; Pim Heijnen; Anna Samarina; Jan Jacobs
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This paper adopts a spatial probit approach to explain interaction effects among cross-sectional units when the dependent variable takes the form of a binary response variable and transitions from state 0 to 1 occur at different moments in time. The model has two spatially lagged variables: one for units that are still in state 0 and one for units that had already transferred to state 1. The parameters are estimated on observations for those units that are still in state 0 at the start of the different time periods, whereas observations on units after they transferred to state 1 are discarded, just as in the literature on duration modeling. Furthermore, neighboring units that had not yet transferred may have a different impact from units that had already transferred. We illustrate our approach with an empirical study of the adoption of inflation targeting for a sample of 58 countries over the period 1985-2008.
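
    A schematic latent-variable form consistent with this description is sketched below; the notation is illustrative and not copied from the paper. Units still in state 0 receive one spatial lag from other at-risk units and a second from units that have already transferred.

    ```latex
    % Illustrative spatial probit with two spatial lags (notation assumed, not from the paper).
    y_{it}^{*} = \rho_{0} \sum_{j \in S_{0,t}} w_{ij}\, y_{jt}^{*}
               + \rho_{1} \sum_{j \in S_{1,t}} w_{ij}
               + x_{it}'\beta + \varepsilon_{it},
    \qquad
    y_{it} = \mathbf{1}\{\, y_{it}^{*} > 0 \,\}
    ```

    Here $S_{0,t}$ and $S_{1,t}$ are the sets of neighbours still in state 0 and already in state 1 at time $t$, $w_{ij}$ are spatial weights, and observations on a unit are dropped once it has transferred, as in duration modelling.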

  4. Data from: Modeling short visual events through the BOLD Moments video fMRI...

    • openneuro.org
    Updated Jul 21, 2024
    Cite
    Benjamin Lahner; Kshitij Dwivedi; Polina Iamshchinina; Monika Graumann; Alex Lascelles; Gemma Roig; Alessandro Thomas Gifford; Bowen Pan; SouYoung Jin; N.Apurva Ratan Murty; Kendrick Kay; Radoslaw Cichy*; Aude Oliva* (2024). Modeling short visual events through the BOLD Moments video fMRI dataset and metadata. [Dataset]. http://doi.org/10.18112/openneuro.ds005165.v1.0.4
    Explore at:
    Dataset updated
    Jul 21, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Benjamin Lahner; Kshitij Dwivedi; Polina Iamshchinina; Monika Graumann; Alex Lascelles; Gemma Roig; Alessandro Thomas Gifford; Bowen Pan; SouYoung Jin; N.Apurva Ratan Murty; Kendrick Kay; Radoslaw Cichy*; Aude Oliva*
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the data repository for the BOLD Moments Dataset. This dataset contains brain responses to 1,102 3-second videos across 10 subjects. Each subject saw the 1,000 video training set 3 times and the 102 video testing set 10 times. Each video is additionally human-annotated with 15 object labels, 5 scene labels, 5 action labels, 5 sentence text descriptions, 1 spoken transcription, 1 memorability score, and 1 memorability decay rate.

    Overview of contents:

    The home folder (everything except the derivatives/ folder) contains the raw data in BIDS format before any preprocessing. Download this folder if you want to run your own preprocessing pipeline (e.g., fMRIPrep, HCP pipeline).

    To comply with licensing requirements, the stimulus set is not available here on OpenNeuro (hence the invalid BIDS validation). See the GitHub repository (https://github.com/blahner/BOLDMomentsDataset) to download the stimulus set and stimulus set derivatives (like frames). To make this dataset perfectly BIDS compliant for use with other BIDS-apps, you may need to copy the 'stimuli' folder from the downloaded stimulus set into the parent directory.

    The derivatives folder contains all data derivatives, including the stimulus annotations (./derivatives/stimuli_metadata/annotations.json), model weight checkpoints for a TSM ResNet50 model trained on a subset of Multi-Moments in Time, and prepared beta estimates from two different fMRIPrep preprocessing pipelines (./derivatives/versionA and ./derivatives/versionB).

    VersionA was used in the main manuscript, and versionB is detailed in the manuscript's supplementary. If you are starting a new project, we highly recommend you use the prepared data in ./derivatives/versionB/ because of its better registration, use of GLMsingle, and availability in more standard/non-standard output spaces. Code used in the manuscript is located at the derivatives version level. For example, the code used in the main manuscript is located under ./derivatives/versionA/scripts. Note that versionA prepared data is very large due to beta estimates for 9 TRs per video. See this GitHub repo for starter code demonstrating basic usage and dataset download scripts: https://github.com/blahner/BOLDMomentsDataset. See this GitHub repo for the TSM ResNet50 model training and inference code: https://github.com/pbw-Berwin/M4-pretrained

    Data collection notes: The notes below are provided for full transparency and should be of no concern to researchers using the data; these inconsistencies have been accounted for and integrated into the BIDS format as if the exceptions had not occurred. The correct pairings between field maps and functional runs are detailed in the .json sidecars accompanying each field map scan (a small example of reading these is sketched after the notes below).

    Subject 2: Session 1: Subject repositioned head for comfort after the third resting state scan, approximately 1 hour into the session. New scout and field map scans were taken. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Session 4: Completed over two separate days due to subject feeling sleepy. All 3 testing runs and 6/10 training runs were completed on the first day, and the last 4 training runs were completed on the second day. Each of the two days for session 4 had its own field map. This did not interfere with session 5. All scans across both days belonging to session 4 were analyzed as if they were collected on the same day. In the case of applying a susceptibility distortion correction analysis, session 4 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Subject 4: Sessions 1 and 2: The fifth (out of 5) localizer run from session 1 was completed at the end of session 2 due to a technical error. This localizer run therefore used the field map from session 2. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Subject 10: Session 5: Subject moved a lot to readjust earplug after the third functional run (1 test and 2 training runs completed). New field map scans were collected. In the case of applying a susceptibility distortion correction analysis, session 5 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
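
    As referenced above, the "IntendedFor" field in each field map's .json sidecar lists the functional runs that field map corrects. A minimal sketch of inspecting it follows; the file name is hypothetical.

    ```python
    # Minimal sketch: listing which functional runs a field map applies to via
    # the BIDS "IntendedFor" field. The sidecar path below is hypothetical.
    import json

    sidecar_path = "sub-02/ses-01/fmap/sub-02_ses-01_run-1_phasediff.json"  # hypothetical name
    with open(sidecar_path) as f:
        sidecar = json.load(f)

    for func_run in sidecar.get("IntendedFor", []):
        print(func_run)  # relative paths of the functional scans this field map corrects
    ```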

  5. CSU CHILL real-time moments data in cfRadial format

    • data.ucar.edu
    archive
    Updated Aug 1, 2025
    Cite
    Pat Kennedy; Steven A. Rutledge (2025). CSU CHILL real-time moments data in cfRadial format [Dataset]. http://doi.org/10.26023/QGQ0-KGA8-3713
    Explore at:
    Available download formats: archive
    Dataset updated
    Aug 1, 2025
    Authors
    Pat Kennedy; Steven A. Rutledge
    Time period covered
    Jan 1, 2014 - Apr 10, 2015
    Area covered
    Description

    This data set consists of daily files, in CfRadial format, of CSU CHILL radar data taken continuously during the FRONT (Front Range Observational Network Testbed) project. See the FRONT S-Pol Data Availability 2014-2015 document linked below to check on data availability.
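
    CfRadial files of this kind can typically be opened with the Py-ART package; a minimal sketch follows, with an illustrative file name (not an actual file from this archive).

    ```python
    # Minimal sketch (assumes Py-ART is installed; the file name is hypothetical):
    # read a CfRadial daily file and list the radar moments it contains.
    import pyart

    radar = pyart.io.read_cfradial("cfrad.20140601_CHILL.nc")  # hypothetical file name
    print(list(radar.fields.keys()))   # e.g. reflectivity, velocity, spectrum width
    print(radar.nsweeps, "sweeps")
    ```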

  6. Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005):...

    • tradingeconomics.com
    csv, excel, json, xml
    Cite
    TRADING ECONOMICS, Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years [Dataset]. https://tradingeconomics.com/italy/at-risk-of-poverty-rate-anchored-at-a-fixed-moment-in-time-2005-less-than-18-years-eurostat-data.html
    Explore at:
    Available download formats: json, csv, excel, xml
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1976 - Dec 31, 2025
    Area covered
    Italy
    Description

    Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years was 22.40% in December of 2024, according to EUROSTAT. Trading Economics provides the current actual value, a historical data chart and related indicators for Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years, last updated from EUROSTAT in July of 2025. Historically, Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years reached a record high of 30.70% in December of 2015 and a record low of 21.70% in December of 2022.

  7. Favorite moments to read in China 2025

    • statista.com
    Updated Jul 18, 2025
    Cite
    Statista (2025). Favorite moments to read in China 2025 [Dataset]. https://www.statista.com/statistics/1308442/china-popular-book-reading-time/
    Explore at:
    Dataset updated
    Jul 18, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    China
    Description

    According to the survey results on Chinese readers released in April 2025, about ** of respondents read during weekends and holidays. Pre-bedtime reading was also common among survey participants.

  8. China: frequency of users checking WeChat moments 2016

    • statista.com
    Updated Jun 26, 2016
    Cite
    Statista (2016). China: frequency of users checking WeChat moments 2016 [Dataset]. https://www.statista.com/statistics/668609/china-frequency-of-users-checking-wechat-moments/
    Explore at:
    Dataset updated
    Jun 26, 2016
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Mar 2016
    Area covered
    China
    Description

    This statistic presents the frequency of WeChat users checking WeChat Moments as of March 2016. Over 61 percent of respondents said that they check Moments every time they open WeChat.

  9. A-Moment-in-Time-Photography-and-My-Studio (Company) - Reverse Whois Lookup

    • whoisdatacenter.com
    csv
    + more versions
    Cite
    AllHeart Web Inc, A-Moment-in-Time-Photography-and-My-Studio (Company) - Reverse Whois Lookup [Dataset]. https://whoisdatacenter.com/company/A-Moment-in-Time-Photography-and-My-Studio/
    Explore at:
    Available download formats: csv
    Dataset authored and provided by
    AllHeart Web Inc
    License

    https://whoisdatacenter.com/terms-of-use/

    Time period covered
    Mar 15, 1985 - Jun 18, 2025
    Description

    Uncover ownership history and changes over time by performing a reverse Whois lookup for the company A-Moment-in-Time-Photography-and-My-Studio.

  10. NCAR S-Pol real-time moments data

    • data.ucar.edu
    archive
    Updated Aug 1, 2025
    Cite
    NCAR/EOL S-Pol Team (2025). NCAR S-Pol real-time moments data [Dataset]. http://doi.org/10.5065/D6N29V5H
    Explore at:
    Available download formats: archive
    Dataset updated
    Aug 1, 2025
    Authors
    NCAR/EOL S-Pol Team
    Time period covered
    Jan 1, 2014 - Apr 10, 2015
    Area covered
    Description

    S-PolKa radar data taken continuously during the FRONT (Front Range Observational Network Testbed) project. The data is in CfRadial format. See the FRONT S-Pol Data Availability 2014-2015 document linked below to check on data availability.

  11. MAVEN SWIA Calibrated Onboard Survey Moment Data Collection - Dataset -...

    • b2find.eudat.eu
    Updated Aug 8, 2024
    Cite
    (2024). MAVEN SWIA Calibrated Onboard Survey Moment Data Collection - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/38ad52b1-3add-5b10-881e-3b8c6395fad1
    Explore at:
    Dataset updated
    Aug 8, 2024
    Description

    SWIA onboard moment files with time-ordered ion moments converted to physical units and coordinates, as computed onboard from Coarse and Fine ion distributions, as well as a header with ancillary information needed to interpret the moments.

  12. Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005):...

    • tradingeconomics.com
    csv, excel, json, xml
    + more versions
    Cite
    TRADING ECONOMICS, Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years [Dataset]. https://tradingeconomics.com/denmark/at-risk-of-poverty-rate-anchored-at-a-fixed-moment-in-time-2005-less-than-18-years-eurostat-data.html
    Explore at:
    Available download formats: csv, json, xml, excel
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1976 - Dec 31, 2025
    Area covered
    Denmark
    Description

    Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years was 10.90% in December of 2024, according to EUROSTAT. Trading Economics provides the current actual value, a historical data chart and related indicators for Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years, last updated from EUROSTAT in August of 2025. Historically, Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years reached a record high of 10.90% in December of 2024 and a record low of 5.90% in December of 2017.

  13. At-risk-of-poverty rate anchored at a fixed moment in time (2019) by age...

    • gimi9.com
    + more versions
    Cite
    At-risk-of-poverty rate anchored at a fixed moment in time (2019) by age group - EU-SILC survey | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_jpglxsm0y8yxbqfbix2a/
    Explore at:
    Description

    The indicator is defined as the percentage of the population whose equivalised disposable income is below the ‘at-risk-of-poverty threshold’ calculated in the standard way for the base year, currently 2005, and then adjusted for inflation.
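
    For illustration of the definition above, a small sketch with made-up numbers: the base-year threshold is held fixed in real terms, uprated only by inflation, and the rate is the share of the population whose equivalised disposable income falls below that anchored threshold.

    ```python
    # Illustrative arithmetic only (all numbers are made up): anchored
    # at-risk-of-poverty rate under the definition quoted above.
    base_year_threshold = 10_000.0      # hypothetical base-year threshold (60% of median income)
    cumulative_inflation = 1.12         # hypothetical price change since the base year

    anchored_threshold = base_year_threshold * cumulative_inflation
    incomes = [8_500, 11_900, 10_800, 15_300, 9_700]   # hypothetical equivalised incomes
    rate = sum(y < anchored_threshold for y in incomes) / len(incomes)
    print(f"anchored threshold: {anchored_threshold:.0f}, rate: {rate:.0%}")
    ```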

  14. At-risk-of-poverty rate anchored at a fixed moment in time (2005) by age...

    • store.smartdatahub.io
    Updated Jul 20, 2019
    + more versions
    Cite
    (2019). At-risk-of-poverty rate anchored at a fixed moment in time (2005) by age group - EU-SILC survey [Dataset]. https://store.smartdatahub.io/dataset/fi_statistics_finland_tesov092_px
    Explore at:
    Dataset updated
    Jul 20, 2019
    Description

    At-risk-of-poverty rate anchored at a fixed moment in time (2005) by age group - EU-SILC survey

  15. MMS 1 Fast Plasma Investigation, Dual Electron Spectrometer (FPI, DES)...

    • res1catalogd-o-tdatad-o-tgov.vcapture.xyz
    • s.cnmilf.com
    • +1more
    Updated Jul 11, 2025
    + more versions
    Cite
    MMS Science Data Center; NASA Space Physics Data Facility (SPDF) Coordinated Data Analysis Web (CDAWeb) Data Services (2025). MMS 1 Fast Plasma Investigation, Dual Electron Spectrometer (FPI, DES) Distribution Moments, Level 2 (L2), Fast Mode, 4.5 s Data [Dataset]. https://res1catalogd-o-tdatad-o-tgov.vcapture.xyz/dataset/mms-1-fast-plasma-investigation-dual-electron-spectrometer-fpi-des-distribution-moments-le
    Explore at:
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The Fast Plasma Investigation (FPI) usually operates in Fast Survey (FS) mode in the MMS Region of Interest (ROI) for the current mission phase. Data taken at burst (30/150 ms for DES/DIS) resolution are aggregated onboard and made available at survey (4.5 s) resolution in this mode. Allowing for calibration activities, avoidance of the Earth's radiation belts, etc., when possible, FPI usually operates in Slow Survey (SS) mode outside of the ROI, and then only the 60 s resolution survey data are available. This product contains results from integrating the standard moments of the phase space distributions formed from the indicated data type (DES/DIS burst, FS or SS). For convenience, some additional parameters are included to augment those most commonly found in a moments product of this sort, plus time stamps and other annotation characterizing the state of the instrument system at the indicated time.
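
    For context, the "standard moments" referred to above are the usual low-order velocity-space integrals of a phase space distribution f(v); the textbook forms are sketched below (generic notation, not the mission's processing pipeline).

    ```latex
    % Standard low-order velocity moments of a phase space distribution f(v)
    % (generic textbook notation, not the FPI processing code).
    n = \int f(\mathbf{v})\, d^{3}v, \qquad
    \mathbf{u} = \frac{1}{n} \int \mathbf{v}\, f(\mathbf{v})\, d^{3}v, \qquad
    \mathsf{P} = m \int (\mathbf{v}-\mathbf{u})(\mathbf{v}-\mathbf{u})\, f(\mathbf{v})\, d^{3}v
    ```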

  16. OMNI, Combined Solar Wind Plasma Moments and Interplanetary Magnetic Field...

    • gimi9.com
    Updated Sep 6, 2019
    + more versions
    Cite
    (2019). OMNI, Combined Solar Wind Plasma Moments and Interplanetary Magnetic Field (IMF) Time-Shifted to the Nose of the Earth's Bow Shock, plus Geomagnetic Indices, 5 min Data | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_omni-combined-solar-wind-plasma-moments-and-interplanetary-magnetic-field-imf-time-shifted/
    Explore at:
    Dataset updated
    Sep 6, 2019
    Area covered
    Earth
    Description

    Near-Earth Heliospheric Data, OMNI, Definitive Multispacecraft Interplanetary Parameters Data, 5 min averaged. Additional information for all parameters is available from the OMNI Data Documentation: https://omniweb.sci.gsfc.nasa.gov/html/HROdocum.html. New data may be accessible via the Space Physics Data Facility (SPDF) OMNIWeb Service: https://omniweb.gsfc.nasa.gov/ow_min.html. The Modified (Level-3) High Resolution OMNI data files are made in the same format as the OMNI files based on SWE Key Parameter data. There are a few differences between the old and new high resolution OMNI data sets:

    1) In the newly modified Level-3 OMNI data files, we used the Wind SWE plasma definitive data rather than the Wind SWE plasma KP-despiked data. Using the definitive data makes it possible to include the Alpha/Proton Density Ratio and to use more accurate plasma parameters. However, the time coverage in the new OMNI data was decreased by 2% to 10%. See the data description at https://spdf.gsfc.nasa.gov/pub/data/omni/high_res_omni/modified/. For a detailed comparison of the 1 min SWE definitive and cross-normalized SWE Key Parameter data sets, see https://omniweb.gsfc.nasa.gov/ftpbrowser/wind_pla_def_kp_norm.html.

    2) To keep the number of words and the record lengths the same as in the old OMNI high resolution data set, we replaced the PCN Index (word #45) in the ASCII records with the new Alpha/Proton Density Ratio parameter.

    3) The latest date for these new data is usually behind that of the OMNI based on SWE_KP data.

    Modifications:

    1) Conversion to ISTP/IACG CDFs via SKTEditor, February 2000.

    2) Time tags in the CDAWeb version were modified to use the CDAWeb convention of mid-average time tags rather than OMNI's original convention of start-of-average time tags, March 2005.
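
    The note on time-tag conventions above amounts to shifting start-of-average tags forward by half the averaging window. A minimal sketch for the 5 min product follows; the timestamps are illustrative.

    ```python
    # Minimal sketch: converting start-of-average time tags to the mid-average
    # convention described above, for the 5 min product. Timestamps are illustrative.
    import pandas as pd

    start_tags = pd.to_datetime(["2005-03-01 00:00", "2005-03-01 00:05"])
    mid_tags = start_tags + pd.Timedelta(minutes=2.5)
    print(mid_tags)
    ```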

  17. At-risk-of-poverty rate anchored at a fixed moment in time (2019) by age and...

    • service.tib.eu
    Updated Jan 8, 2025
    Cite
    (2025). At-risk-of-poverty rate anchored at a fixed moment in time (2019) by age and sex [Dataset]. https://service.tib.eu/ldmservice/dataset/eurostat_8tt5a4p0eyiaxksdddpd8w
    Explore at:
    Dataset updated
    Jan 8, 2025
    Description

    At-risk-of-poverty rate anchored at a fixed moment in time (2019) by age and sex

  18. NCAR S-PolKa radar moments data, Ka-band, Marshall field site

    • data.ucar.edu
    netcdf
    Updated Aug 1, 2025
    + more versions
    Cite
    NCAR/EOL S-Pol Team (2025). NCAR S-PolKa radar moments data, Ka-band, Marshall field site [Dataset]. http://doi.org/10.26023/ZAV7-XMJN-JP0B
    Explore at:
    Available download formats: netcdf
    Dataset updated
    Aug 1, 2025
    Authors
    NCAR/EOL S-Pol Team
    Time period covered
    Dec 20, 2017 - Jan 1, 9999
    Area covered
    Description

    Selected time periods of radar moments data collected by the S-PolKa radar at the Marshall field site in Colorado. S-PolKa is usually run at the Marshall field site for testing and maintenance purposes, so this dataset contains data from various instrument states. It is not quality controlled. If you would like to use this data set for research purposes, please contact us first at the email address below.

  19. shows the correlations, at observation-level, between negative affect,...

    • plos.figshare.com
    xls
    Updated Jun 5, 2023
    Cite
    Marieke Wichers; Zuzana Kasanova; Jindra Bakker; Evert Thiery; Catherine Derom; Nele Jacobs; Jim van Os (2023). shows the correlations, at observation-level, between negative affect, positive affect, (un)pleasantness of company and physical activity. [Dataset]. http://doi.org/10.1371/journal.pone.0129722.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Marieke Wichers; Zuzana Kasanova; Jindra Bakker; Evert Thiery; Catherine Derom; Nele Jacobs; Jim van Os
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    shows the correlations, at observation-level, between negative affect, positive affect, (un)pleasantness of company and physical activity.

  20. Replication data for Only a Moment in Time? The Changing Effectiveness of...

    • figshare.com
    bin
    Updated May 9, 2025
    Cite
    Kristian Skrede Gleditsch (2025). Replication data for Only a Moment in Time? The Changing Effectiveness of Mass Mobilization on Transitions to Democracy [Dataset]. http://doi.org/10.6084/m9.figshare.28981298.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    May 9, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kristian Skrede Gleditsch
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This archive provides access to the raw data and routines used in the empirical analyses reported in "Only a Moment in Time? The Changing Effectiveness of Mass Mobilization on Transitions to Democracy". All the data used are from fully open sources. A full replication archive will be made available on publication of the article.
