Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
We present Audiovisual Moments in Time (AVMIT), a large-scale dataset of audiovisual action events. In an extensive annotation task, 11 participants labelled a subset of 3-second audiovisual videos from the Moments in Time dataset (MIT). For each trial, participants assessed whether the labelled audiovisual action event was present and whether it was the most prominent feature of the video. The dataset includes annotations for 57,177 audiovisual videos, each independently evaluated by 3 of 11 trained participants. From this initial collection, we created a curated test set of 16 distinct action classes with 60 videos each (960 videos). We also offer 2 sets of pre-computed audiovisual feature embeddings, using VGGish/YamNet for audio data and VGG16/EfficientNetB0 for visual data, thereby lowering the barrier to entry for audiovisual DNN research. We explored the advantages of AVMIT annotations and feature embeddings for improving performance on audiovisual event recognition. Six Recurrent Neural Networks (RNNs) were trained on either AVMIT-filtered audiovisual events or modality-agnostic events from MIT, and then tested on our audiovisual test set. In all RNNs, top-1 accuracy increased by 2.71-5.94% when training exclusively on audiovisual events, even outweighing a three-fold increase in training data. Additionally, we introduce the Supervised Audiovisual Correspondence (SAVC) task, whereby a classifier must discern whether audio and visual streams correspond to the same action label. We trained 6 RNNs on the SAVC task, with or without AVMIT-filtering, to explore whether AVMIT is helpful for cross-modal learning. In all RNNs, accuracy improved by 2.09-19.16% with AVMIT-filtered data. We anticipate that the newly annotated AVMIT dataset will serve as a valuable resource for research and comparative experiments involving computational models and human participants, particularly when addressing research questions where audiovisual correspondence is of critical importance.
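As a rough illustration of the kind of model described above, here is a minimal PyTorch sketch of a recurrent classifier over pre-computed audiovisual embeddings. The embedding dimensions, sequence length, and data handling are placeholders for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch: a small GRU classifier over pre-computed audiovisual
# feature embeddings (e.g., VGGish audio + VGG16 visual), assuming the
# embeddings are provided as per-timestep sequences.
import torch
import torch.nn as nn

class AVEventRNN(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256, n_classes=16):
        super().__init__()
        self.rnn = nn.GRU(audio_dim + visual_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, audio_seq, visual_seq):
        # audio_seq: (batch, T, audio_dim); visual_seq: (batch, T, visual_dim)
        x = torch.cat([audio_seq, visual_seq], dim=-1)
        _, h = self.rnn(x)              # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))  # (batch, n_classes) class logits

# Example forward pass on random tensors standing in for real embeddings.
model = AVEventRNN()
logits = model(torch.randn(4, 10, 128), torch.randn(4, 10, 512))
print(logits.shape)  # torch.Size([4, 16])
```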
bryant1410/moments-in-time dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This paper adopts a spatial probit approach to explain interaction effects among cross-sectional units when the dependent variable takes the form of a binary response variable and transitions from state 0 to 1 occur at different moments in time. The model has two spatially lagged variables: one for units that are still in state 0 and one for units that had already transferred to state 1. The parameters are estimated on observations for those units that are still in state 0 at the start of the different time periods, whereas observations on units after they transferred to state 1 are discarded, just as in the literature on duration modeling. Furthermore, neighboring units that had not yet transferred may have a different impact from units that had already transferred. We illustrate our approach with an empirical study of the adoption of inflation targeting for a sample of 58 countries over the period 1985-2008.
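A schematic latent-variable form of such a model, with separate spatial lags for units still in state 0 and units that have already transferred to state 1, is given below; the notation is illustrative and not necessarily the paper's.

```latex
\[
  y_{it}^{*} = \delta \sum_{j \in S_{0t}} w_{ij}\, y_{jt}^{*}
             + \rho \sum_{j \in S_{1t}} w_{ij}\, d_{jt}
             + \mathbf{x}_{it}'\boldsymbol{\beta} + \varepsilon_{it},
  \qquad
  y_{it} = \mathbf{1}\{\, y_{it}^{*} > 0 \,\},
\]
% S_{0t}: units still in state 0 at time t; S_{1t}: units that have already
% transferred to state 1 (d_{jt} = 1); w_{ij}: spatial weights.
```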
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This is the data repository for the BOLD Moments Dataset. This dataset contains brain responses to 1,102 3-second videos across 10 subjects. Each subject saw the 1,000 video training set 3 times and the 102 video testing set 10 times. Each video is additionally human-annotated with 15 object labels, 5 scene labels, 5 action labels, 5 sentence text descriptions, 1 spoken transcription, 1 memorability score, and 1 memorability decay rate.
Overview of contents:
The home folder (everything except the derivatives/ folder) contains the raw data in BIDS format before any preprocessing. Download this folder if you want to run your own preprocessing pipeline (e.g., fMRIPrep, HCP pipeline).
To comply with licensing requirements, the stimulus set is not available here on OpenNeuro (hence the failed BIDS validation). See the GitHub repository (https://github.com/blahner/BOLDMomentsDataset) to download the stimulus set and stimulus set derivatives (such as frames). To make this dataset fully BIDS compliant for use with other BIDS-apps, you may need to copy the 'stimuli' folder from the downloaded stimulus set into the parent directory.
The derivatives folder contains all data derivatives, including the stimulus annotations (./derivatives/stimuli_metadata/annotations.json), model weight checkpoints for a TSM ResNet50 model trained on a subset of Multi-Moments in Time, and prepared beta estimates from two different fMRIPrep preprocessing pipelines (./derivatives/versionA and ./derivatives/versionB).
VersionA was used in the main manuscript, and versionB is detailed in the manuscript's supplementary material. If you are starting a new project, we highly recommend using the prepared data in ./derivatives/versionB/ because of its better registration, its use of GLMsingle, and its availability in more standard/non-standard output spaces. Code used in the manuscript is located at the derivatives version level; for example, the code used in the main manuscript is located under ./derivatives/versionA/scripts. Note that the versionA prepared data are very large because beta estimates are provided for 9 TRs per video. See this GitHub repo for starter code demonstrating basic usage and dataset download scripts: https://github.com/blahner/BOLDMomentsDataset. See this GitHub repo for the TSM ResNet50 model training and inference code: https://github.com/pbw-Berwin/M4-pretrained
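For orientation, a minimal sketch of loading the stimulus annotations is shown below; the JSON structure and field names are assumptions for illustration, not the documented schema.

```python
# Minimal sketch: load the stimulus annotations shipped with the derivatives.
# The keying by video ID and the field names below are assumed for illustration;
# check derivatives/stimuli_metadata/annotations.json for the actual schema.
import json

with open("derivatives/stimuli_metadata/annotations.json") as f:
    annotations = json.load(f)

video_id, meta = next(iter(annotations.items()))  # assumes a dict keyed by video ID
print(video_id)
print(meta.get("object_labels"))  # e.g., the 15 object labels per video
print(meta.get("action_labels"))  # e.g., the 5 action labels per video
print(meta.get("memorability"))   # e.g., the memorability score
```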
Data collection notes: All data collection notes explained below are detailed here for the purpose of full transparency and should be of no concern to researchers using the data; i.e., these inconsistencies have been attended to and integrated into the BIDS format as if these exceptions had not occurred. The correct pairings between field maps and functional runs are detailed in the .json sidecars accompanying each field map scan.
Subject 2: Session 1: Subject repositioned head for comfort after the third resting state scan, approximately 1 hour into the session. New scout and field map scans were taken. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
Session 4: Completed over two separate days due to subject feeling sleepy. All 3 testing runs and 6/10 training runs were completed on the first day, and the last 4 training runs were completed on the second day. Each of the two days for session 4 had its own field map. This did not interfere with session 5. All scans across both days belonging to session 4 were analyzed as if they were collected on the same day. In the case of applying a susceptibility distortion correction analysis, session 4 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
Subject 4: Sessions 1 and 2: The fifth (out of 5) localizer run from session 1 was completed at the end of session 2 due to a technical error. This localizer run therefore used the field map from session 2. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
Subject 10: Session 5: Subject moved a lot to readjust earplug after the third functional run (1 test and 2 training runs completed). New field map scans were collected. In the case of applying a susceptibility distortion correction analysis, session 5 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
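A small sketch of reading these pairings programmatically is shown below; the sidecar file name is a placeholder, and only the standard BIDS "IntendedFor" field is assumed.

```python
# Sketch: list which functional runs a field map applies to via the BIDS
# "IntendedFor" field of its .json sidecar. The path below is a placeholder;
# substitute an actual field map sidecar from the dataset.
import json
from pathlib import Path

sidecar = Path("sub-02/ses-01/fmap/sub-02_ses-01_run-1_phasediff.json")  # placeholder
meta = json.loads(sidecar.read_text())

for func_path in meta.get("IntendedFor", []):
    print(f"{sidecar.name} -> {func_path}")
```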
This data set consists of daily files of CSU CHILL radar data in CfRadial format, taken continuously during the FRONT (Front Range Observational Network Testbed) project. See the FRONT S-Pol Data Availability 2014-2015 document linked below to check on data availability.
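Since CfRadial files are netCDF-based, they can be inspected with standard radar tooling; a minimal Py-ART sketch (the file name is a placeholder, not an actual file from this data set) might look like this:

```python
# Sketch: open a daily CfRadial file and list the radar moment fields it contains.
import pyart

radar = pyart.io.read_cfradial("cfrad_CHILL_example.nc")  # placeholder path
print(list(radar.fields.keys()))  # available moments (e.g., reflectivity, velocity)
print(radar.nrays, radar.ngates)  # ray/gate dimensions of the volume
```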
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years was 22.40% in December of 2024, according to the EUROSTAT. Trading Economics provides the current actual value, an historical data chart and related indicators for Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years - last updated from the EUROSTAT on July of 2025. Historically, Italy - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years reached a record high of 30.70% in December of 2015 and a record low of 21.70% in December of 2022.
According to the survey results on Chinese readers released in April 2025, about ** of respondents read during weekends and holidays. Pre-bedtime reading was also common among survey participants.
This statistic presents the frequency with which WeChat users checked WeChat Moments as of March 2016. Over 61 percent of the respondents said that they check Moments every time they open WeChat.
https://whoisdatacenter.com/terms-of-use/
Uncover ownership history and changes over time by performing a reverse Whois lookup for the company A-Moment-in-Time-Photography-and-My-Studio.
S-PolKa radar data taken continuously during the FRONT (Front Range Observational Network Testbed) project. The data is in CfRadial format. See the FRONT S-Pol Data Availability 2014-2015 document linked below to check on data availability.
SWIA onboard moment files with time-ordered ion moments converted to physical units and coordinates, as computed onboard from Coarse and Fine ion distributions, as well as a header with ancillary information needed to interpret the moments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years was 10.90% in December of 2024, according to the EUROSTAT. Trading Economics provides the current actual value, an historical data chart and related indicators for Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years - last updated from the EUROSTAT on August of 2025. Historically, Denmark - At Risk of Poverty rate anchored at a fixed moment in time (2005): Less than 18 years reached a record high of 10.90% in December of 2024 and a record low of 5.90% in December of 2017.
The indicator is defined as the percentage of the population whose equivalised disposable income is below the ‘at-risk-of-poverty threshold’ calculated in the standard way for the base year, currently 2005, and then adjusted for inflation.
At-risk-of-poverty rate anchored at a fixed moment in time (2005) by age group - EU-SILC survey
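As a worked toy example of the definition above (all numbers are hypothetical), the anchored rate keeps the 2005 threshold fixed in real terms, uprates it only for inflation, and counts incomes below it:

```python
# Hypothetical worked example of an anchored at-risk-of-poverty rate.
threshold_2005 = 10_000.0             # hypothetical 2005 threshold (60% of the 2005 median)
cpi_2005, cpi_current = 100.0, 140.0  # hypothetical price indices

anchored_threshold = threshold_2005 * (cpi_current / cpi_2005)  # 14,000

# Hypothetical equivalised disposable incomes in the current year.
incomes = [9_000, 12_500, 15_000, 13_800, 21_000]
rate = sum(income < anchored_threshold for income in incomes) / len(incomes)
print(f"anchored threshold: {anchored_threshold:.0f}, anchored rate: {rate:.0%}")  # 60%
```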
The Fast Plasma Instrument (FPI) usually operates in Fast Survey (FS) mode in the MMS Region of Interest (ROI) for the current mission phase. In this mode, data taken at burst (30/150 ms for DES/DIS) resolution are aggregated onboard and made available at survey (4.5 s) resolution. Allowing for calibration activities, avoidance of the Earth's radiation belts, etc., FPI usually operates, when possible, in Slow Survey (SS) mode outside of the ROI, and then only the 60 s resolution survey data are available. This product contains results from integrating the standard moments of the phase space distributions formed from the indicated data type (DES/DIS burst, FS or SS). For convenience, some additional parameters are included to augment those most commonly found in a moments product of this sort, plus time stamps and other annotation characterizing the state of the instrument system at the indicated time.
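For reference, the "standard moments" of a phase-space distribution f(v) referred to above are, schematically (the notation here is ours, not the FPI product's variable names):

```latex
\[
  n = \int f(\mathbf{v})\, d^{3}v, \qquad
  \mathbf{u} = \frac{1}{n}\int \mathbf{v}\, f(\mathbf{v})\, d^{3}v, \qquad
  \mathsf{P} = m \int (\mathbf{v}-\mathbf{u})(\mathbf{v}-\mathbf{u})\, f(\mathbf{v})\, d^{3}v ,
\]
% i.e., number density, bulk velocity, and the pressure tensor; higher-order
% moments (e.g., heat flux) follow the same pattern.
```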
Near-Earth Heliospheric Data, OMNI, Definitive Multispacecraft Interplanetary Parameters Data, 5 min averaged.
Additional information for all parameters is available from the OMNI Data Documentation: https://omniweb.sci.gsfc.nasa.gov/html/HROdocum.html. New data may be accessible via the Space Physics Data Facility (SPDF) OMNIWeb Service: https://omniweb.gsfc.nasa.gov/ow_min.html.
The Modified (Level-3) High Resolution OMNI data files are made in the same format as the OMNI files based on SWE Key Parameter data. There are a few differences between the old and new high resolution OMNI data sets:
1) In the newly modified Level-3 OMNI data files, we used the Wind SWE plasma definitive data rather than the Wind SWE plasma KP-despiked data. Using the definitive data makes it possible to include the Alpha/Proton Density Ratio and to use more accurate plasma parameters. However, the time coverage in the new OMNI data decreased by 2% to 10%. See the data description at https://spdf.gsfc.nasa.gov/pub/data/omni/high_res_omni/modified/. For a detailed comparison of the 1 min SWE definitive and cross-normalized SWE Key Parameter data sets, see https://omniweb.gsfc.nasa.gov/ftpbrowser/wind_pla_def_kp_norm.html.
2) To keep the number of words and the record lengths the same as in the old OMNI high resolution data set, we replaced the PCN Index (word #45) in the ASCII records with the new Alpha/Proton Density Ratio parameter.
3) The latest date for these new data usually lags behind that of the OMNI based on SWE_KP data.
Modifications:
1) Conversion to ISTP/IACG CDFs via SKTEditor, February 2000.
2) Time tags in the CDAWeb version were modified to use the CDAWeb convention of mid-average time tags rather than OMNI's original convention of start-of-average time tags, March 2005.
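The time-tag convention noted in the last modification amounts to shifting each tag forward by half of the averaging window; a minimal sketch for the 5 min averaged product is:

```python
# Sketch: convert a start-of-average time tag to the CDAWeb mid-average convention
# by shifting it forward by half of the averaging window (5 min here).
from datetime import datetime, timedelta

def mid_average_tag(start_tag: datetime, window_minutes: float = 5.0) -> datetime:
    return start_tag + timedelta(minutes=window_minutes / 2.0)

print(mid_average_tag(datetime(2005, 3, 1, 0, 0)))  # 2005-03-01 00:02:30
```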
At-risk-of-poverty rate anchored at a fixed moment in time (2019) by age and sex
Selected time periods of radar moments data collected by the S-PolKa radar at the Marshall field site in Colorado. S-PolKa is usually run at the Marshall field site for testing and maintenance purposes, so this dataset contains data from a variety of instrument states. It is not quality controlled. If you would like to use this data set for research purposes, please contact us first at the email address below.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset shows the observation-level correlations between negative affect, positive affect, (un)pleasantness of company, and physical activity.
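A minimal pandas sketch of such observation-level correlations is shown below; the file and column names are hypothetical stand-ins for the actual variables.

```python
# Hypothetical sketch: pool all momentary observations and correlate the four
# variables directly at the observation level (file and column names assumed).
import pandas as pd

df = pd.read_csv("momentary_observations.csv")  # placeholder file of momentary reports
cols = ["negative_affect", "positive_affect", "pleasantness_of_company", "physical_activity"]
print(df[cols].corr())  # Pearson correlation matrix across all observations
```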
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This archive provides access to the raw data and routines used in the empirical analyses reported in "Only a Moment in Time? The Changing Effectiveness of Mass Mobilization on Transitions to Democracy". All the data used are from fully open sources. A full replication archive will be made available on publication of the article.