100+ datasets found
  1. Context

    • opencontext.org
    Updated Sep 30, 2022
    Cite
    Levent Atici; Sarah W. Kansa; Justin S.E. Lev-Tov (2022). Context [Dataset]. https://opencontext.org/predicates/dc501639-5b33-4305-7d85-6a48502f732b
    Explore at:
    Dataset updated
    Sep 30, 2022
    Dataset provided by
    Open Context
    Authors
    Levent Atici; Sarah W. Kansa; Justin S.E. Lev-Tov
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An Open Context "predicates" dataset item. Open Context publishes structured data as granular, URL-identified Web resources. This "Variables" record is part of the "Chogha Mish Fauna" data publication.

  2. Dataset used in "Free context smartphone based application for motor...

    • data.europa.eu
    unknown
    Updated Jul 3, 2025
    Cite
    Zenodo (2025). Dataset used in "Free context smartphone based application for motor activity levels recognition" [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-1244094?locale=da
    Explore at:
    unknown (28563853)
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the dataset used in the paper "Free context smartphone based application for motor activity levels recognition", 2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a better tomorrow (RTSI), Bologna, 2016, pp. 1-4. The data refer to three subjects (i.e., subject1, subject2, and subject3). For each subject a folder is created. The folder contains the data used for training and testing under all the conditions addressed by the reference paper. Activities are labeled by the last character of the filename as follows: 1-2 resting; 3-6 walking; 7-8 running; 9-12 climbing stairs.
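As a rough illustration, the labeling convention above can be decoded in Python. The description says the label is the final character of the filename, but since labels run up to 12 this sketch reads all trailing digits; the filenames used here are hypothetical examples, not actual files from the dataset.

```python
import re

# Label ranges per the dataset description:
# 1-2 resting; 3-6 walking; 7-8 running; 9-12 climbing stairs.
def activity_from_filename(filename: str) -> str:
    # Strip a file extension, if any, then read the trailing digits.
    stem = filename.rsplit(".", 1)[0]
    match = re.search(r"(\d+)$", stem)
    if not match:
        raise ValueError(f"no trailing label in {filename!r}")
    label = int(match.group(1))
    if 1 <= label <= 2:
        return "resting"
    if 3 <= label <= 6:
        return "walking"
    if 7 <= label <= 8:
        return "running"
    if 9 <= label <= 12:
        return "climbing stairs"
    raise ValueError(f"unknown label {label}")
```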

  3. Data from: Context-Aware 3D Object Anchoring for Mobile Robots Dataset

    • zenodo.org
    • portaldelainvestigacion.uma.es
    • +2 more
    bin, bz2, mp4
    Updated Jan 21, 2020
    Cite
    Martin Günther; José Raúl Ruiz-Sarmiento; Cipriano Galindo; Javier González-Jiménez; Joachim Hertzberg (2020). Context-Aware 3D Object Anchoring for Mobile Robots Dataset [Dataset]. http://doi.org/10.5281/zenodo.1257047
    Explore at:
    bz2, bin, mp4
    Dataset updated
    Jan 21, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Martin Günther; José Raúl Ruiz-Sarmiento; Cipriano Galindo; Javier González-Jiménez; Joachim Hertzberg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset accompanies the following publication:

    Günther, M.; Ruiz-Sarmiento, J. R.; Galindo, C.; González-Jiménez, J. & Hertzberg, J. Context-Aware 3D Object Anchoring for Mobile Robots. Robot. Auton. Syst., 2018 (accepted)

    The dataset consists of 15 scenes inspected by a robot equipped with an RGB-D camera driving around a table and turning towards it from different locations. The table contained a number of objects in varying table settings. In total, the dataset contains 1387 seconds of observation and 144 unique objects from 9 categories:

    • SugarPot
    • MilkPot
    • CoffeeJug
    • MobilePhone
    • Mug
    • Dish
    • Fork
    • Knife
    • Spoon
    • TableSign

    Segmentation, tracking, and local object recognition were run on the recorded sensor data, and their output (tracked objects and local recognition results) was added to the dataset. Since the objects were observed from multiple perspectives and tracking was lost while the robot was moving from one observation pose to another, the dataset contains more than one track ID for most objects (one for each subsequent observation of the object). Each track ID was manually labeled with the ground truth category of the object it represented. Additionally, all track IDs belonging to the same object were manually grouped together to allow evaluation of the anchoring process. Track IDs that did not correspond to any object on the table (but instead to objects on different tables, pieces of the table itself, or other artifacts) were manually removed. In total, out of 432 track IDs, 410 (94.9%) were associated with true objects, while 22 (5.1%) were removed as artifacts.


    File contents

    All data is provided as rosbags. The naming scheme is as follows:

    • `*-sensordata.bag.bz2`: The raw sensor data from the robot and all transform data, including localization in a map.
    • `*-perception.bag.bz2`: The object recognition results and ground truth information for the tracked objects.
    • `scene??-pr2-*.bag.bz2`: 5 scenes that were recorded using the PR2 robot.
    • `scene??-calvin-*.bag.bz2`: 10 scenes that were recorded using the Calvin robot.

    Both robots used an ASUS Xtion Pro Live as 3D camera.

    `race_vision_msgs.tar.bz2`: The custom messages used in the `-perception` rosbags, as a ROS Kinetic package.
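The naming scheme above is shell-glob friendly, so a file listing can be partitioned with the standard library. A sketch, using hypothetical filenames that follow the stated patterns:

```python
import fnmatch

# Hypothetical example listing following the naming scheme above.
files = [
    "scene10-pr2-sensordata.bag.bz2",
    "scene10-pr2-perception.bag.bz2",
    "scene19-calvin-sensordata.bag.bz2",
]

# Split the listing into raw sensor data vs. perception results.
sensordata = fnmatch.filter(files, "*-sensordata.bag.bz2")
perception = fnmatch.filter(files, "*-perception.bag.bz2")

# Separate scenes by robot using the scene??-<robot>-* convention.
pr2_scenes = fnmatch.filter(files, "scene??-pr2-*.bag.bz2")
calvin_scenes = fnmatch.filter(files, "scene??-calvin-*.bag.bz2")
```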


    Videos

    To get a first impression of the dataset, `scene10.mp4` and `scene19.mp4` show the corresponding scenes from the point of view of the robot's RGB camera.

  4. Colors in Context

    • kaggle.com
    zip
    Updated Apr 28, 2022
    Cite
    Nicholas Tomlin (2022). Colors in Context [Dataset]. https://www.kaggle.com/nickatomlin/colors
    Explore at:
    zip (2263670 bytes)
    Dataset updated
    Apr 28, 2022
    Authors
    Nicholas Tomlin
    Description

    Dataset

    This dataset was created by Nicholas Tomlin.

    Contents

  5. Historic Context Statements

    • s.cnmilf.com
    • data.sfgov.org
    • +2 more
    Updated Oct 4, 2025
    Cite
    data.sfgov.org (2025). Historic Context Statements [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/historic-context-statements
    Explore at:
    Dataset updated
    Oct 4, 2025
    Dataset provided by
    data.sfgov.org
    Description

    This data includes the City’s Adopted Historic Context Statements, reviewed and adopted by the Historic Preservation Commission. Historic Context Statements provide background history of particular neighborhoods, themes, or cultures and tie this history to sites within the built environment. These statements help planners in the identification, evaluation, interpretation, and designation of historic sites related to a particular history. For more information please see https://sfplanning.org/resource/historic-context-statements

  6. Period (context)

    • opencontext.org
    Updated Jul 16, 2023
    Cite
    Douglas R Clark; Larry G Herr (2023). Period (context) [Dataset]. https://opencontext.org/predicates/e37439f2-4a10-4163-af4d-730f8c9dfe9c
    Explore at:
    Dataset updated
    Jul 16, 2023
    Dataset provided by
    Open Context
    Authors
    Douglas R Clark; Larry G Herr
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An Open Context "predicates" dataset item. Open Context publishes structured data as granular, URL-identified Web resources. This "Variables" record is part of the "Madaba Plains Project-`Umayri" data publication.

  7. Replication Data for: 'A Retrieved-Context Theory of Financial Decisions'

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Apr 2, 2024
    Cite
    Jessica A. Wachter; Michael Jacob Kahana (2024). Replication Data for: 'A Retrieved-Context Theory of Financial Decisions' [Dataset]. http://doi.org/10.7910/DVN/PQRZMT
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 2, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Jessica A. Wachter; Michael Jacob Kahana
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The data and programs replicate tables and figures from "A Retrieved-Context Theory of Financial Decisions", by Wachter and Kahana. Please see the README file for additional details.

  8. Data from: The Dynamic Context of Teen Dating Violence in Adolescent...

    • catalog.data.gov
    • datasets.ai
    • +2 more
    Updated Nov 14, 2025
    Cite
    National Institute of Justice (2025). The Dynamic Context of Teen Dating Violence in Adolescent Relationships, Baltimore, Maryland, 2014-2016 [Dataset]. https://catalog.data.gov/dataset/the-dynamic-context-of-teen-dating-violence-in-adolescent-relationships-baltimore-mar-2014-5664d
    Explore at:
    Dataset updated
    Nov 14, 2025
    Dataset provided by
    National Institute of Justice
    Area covered
    Baltimore, Maryland
    Description

    These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. Adolescent females residing in Baltimore, Maryland, who were involved in a relationship with a history of violence were sought to participate in this research study. Respondents were interviewed and then followed through daily diary entries for several months. The aim of the research was to understand the context of teen dating violence (TDV). Prior research on relationship context has not focused on minority populations; therefore, the focus of this project was urban, predominantly African American females. The available data in this collection include three SAS (.sas7bdat) files and a single SAS formats file that contains variable and value label information for all three data files. The three data files are:

    • final_baseline.sas7bdat (157 cases / 252 variables)
    • final_partnergrid.sas7bdat (156 cases / 76 variables)
    • hart_final_sas7bdata (7004 cases / 23 variables)

  9. Semantic Similarity with Concept Senses: new Experiment

    • data.mendeley.com
    Updated Oct 24, 2022
    Cite
    Francesco Taglino (2022). Semantic Similarity with Concept Senses: new Experiment [Dataset]. http://doi.org/10.17632/v2bwh7z8kj.1
    Explore at:
    Dataset updated
    Oct 24, 2022
    Authors
    Francesco Taglino
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset represents the results of the experimentation of a method for evaluating semantic similarity between concepts in a taxonomy. The method is based on the information-theoretic approach and allows senses of concepts in a given context to be considered. Relevance of senses is calculated in terms of semantic relatedness with the compared concepts. In a previous work [9], the adopted semantic relatedness method was the one described in [10], while in this work we also adopted the ones described in [11], [12], [13], [14], [15], and [16].

    We applied our proposal by extending 7 methods for computing semantic similarity in a taxonomy, selected from the literature. The methods considered in the experiment are referred to as R[2], W&P[3], L[4], J&C[5], P&S[6], A[7], and A&M[8].

    The experiment was run on the well-known Miller and Charles benchmark dataset [1] for assessing semantic similarity.

    The results are organized in seven folders, each with the results related to one of the above semantic relatedness methods. In each folder there is a set of files, each referring to one pair of the Miller and Charles dataset. In fact, for each pair of concepts, all the 28 pairs are considered as possible different contexts.

    REFERENCES

    [1] Miller G.A., Charles W.G. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1).
    [2] Resnik P. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Int. Joint Conf. on Artificial Intelligence, Montreal.
    [3] Wu Z., Palmer M. 1994. Verb semantics and lexical selection. 32nd Annual Meeting of the Association for Computational Linguistics.
    [4] Lin D. 1998. An Information-Theoretic Definition of Similarity. Int. Conf. on Machine Learning.
    [5] Jiang J.J., Conrath D.W. 1997. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy. Int. Conf. Research on Computational Linguistics.
    [6] Pirrò G. 2009. A Semantic Similarity Metric Combining Features and Intrinsic Information Content. Data Knowl. Eng. 68(11).
    [7] Adhikari A., Dutta B., Dutta A., Mondal D., Singh S. 2018. An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology. J. Assoc. Inf. Sci. Technol. 69(8).
    [8] Adhikari A., Singh S., Mondal D., Dutta B., Dutta A. 2016. A Novel Information Theoretic Framework for Finding Semantic Similarity in WordNet. CoRR, arXiv:1607.05422.
    [9] Formica A., Taglino F. 2021. An Enriched Information-Theoretic Definition of Semantic Similarity in a Taxonomy. IEEE Access, vol. 9.
    [10] Information Content-based approach [Schuhmacher and Ponzetto, 2014].
    [11] Linked Data Semantic Distance (LDSD) [Passant, 2010].
    [12] Wikipedia Link-based Measure (WLM) [Witten and Milne, 2008].
    [13] Linked Open Data Description Overlap-based approach (LODDO) [Zhou et al., 2012].
    [14] Exclusivity-based [Hulpuş et al., 2015].
    [15] ASRMP [El Vaigh et al., 2020].
    [16] LDSDGN [Piao and Breslin, 2016].

  10. Data from: Exploratory Spatial Data Approach to Identify the Context of...

    • catalog.data.gov
    • datasets.ai
    • +2 more
    Updated Mar 12, 2025
    Cite
    National Institute of Justice (2025). Exploratory Spatial Data Approach to Identify the Context of Unemployment-Crime Linkages in Virginia, 1995-2000 [Dataset]. https://catalog.data.gov/dataset/exploratory-spatial-data-approach-to-identify-the-context-of-unemployment-crime-linka-1995-053cc
    Explore at:
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    National Institute of Justice (http://nij.ojp.gov/)
    Description

    This research is an exploration of a spatial approach to identify the contexts of unemployment-crime relationships at the county level. Using Exploratory Spatial Data Analysis (ESDA) techniques, the study explored the relationship between unemployment and property crimes (burglary, larceny, motor vehicle theft, and robbery) in Virginia from 1995 to 2000. Unemployment rates were obtained from the Department of Labor, while crime rates were obtained from the Federal Bureau of Investigation's Uniform Crime Reports. Demographic variables are included, and a resource deprivation scale was created by combining measures of logged median family income, percentage of families living below the poverty line, and percentage of African American residents.
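The resource deprivation scale is described as combining logged median family income, percent of families below the poverty line, and percent of African American residents. One plausible construction (not necessarily the study's exact method) is a standardized composite, sketched here with made-up county values:

```python
import numpy as np
import pandas as pd

# Hypothetical county-level inputs; the actual study data are not reproduced here.
counties = pd.DataFrame({
    "median_family_income": [52000, 38000, 61000, 45000],
    "pct_below_poverty": [8.0, 19.5, 5.2, 12.1],
    "pct_african_american": [12.0, 30.0, 6.5, 18.0],
})

# Log income and reverse its sign so that higher values mean more deprivation,
# then z-score each component and average them into a single scale.
z = counties.copy()
z["median_family_income"] = -np.log(counties["median_family_income"])
z = (z - z.mean()) / z.std()
counties["resource_deprivation"] = z.mean(axis=1)
```

Equal weighting of z-scored components is a common default for deprivation indices; the original study may have used a different weighting (e.g. factor loadings).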

  11. Bighorn Canyon National Recreation Area Landscape Context, Raster Data

    • catalog.data.gov
    Updated Oct 16, 2025
    Cite
    National Park Service (2025). Bighorn Canyon National Recreation Area Landscape Context, Raster Data [Dataset]. https://catalog.data.gov/dataset/bighorn-canyon-national-recreation-area-landscape-context-raster-data
    Explore at:
    Dataset updated
    Oct 16, 2025
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Description

    This zip file contains 21 raster layers representing data from a variety of landscape metrics used to analyze the landscape context of Bighorn Canyon National Recreation Area (BICA). Their names, descriptions, and categorization are as follows:

    Housing

    This raster dataset contains sixteen layers named in the format "bhc1950us," with the name of each layer containing a year representing the decades from 1950 through 2100 (e.g. bhc1960us, bhc1970us, bhc1980us, etc.). The layers depict housing density classes for the 30 km buffer around and including BICA's managed lands for each decade. These housing density estimates come from a Spatially Explicit Regional Growth Model (SERGoM, Theobald 2005) based on U.S. Census data from 2010 and depict the location and density of private land housing unit classes around BICA. SERGoM methods combined housing data with information on land ownership and density of major roads (interstates, state highways, and county roads) to provide a more accurate allocation of the location of housing units over the landscape. Details on how SERGoM was used for NPS data can be found in the NPScape Standard Operating Procedure (SOP): Housing Measure, at https://irma.nps.gov/DataStore/Reference/Profile/2221576. SERGoM used historical and current housing density patterns as data inputs to develop a simulation model that forecasts future housing density patterns based on county-level population projections. Further details about the methodology of SERGoM can be found at https://www.jstor.org/stable/26267722?seq=2

    SERGoM_bhc_metrics classes:

    • 0: Private undeveloped
    • 1: 2,470 units / square km
    • 12: Commercial/industrial

    Land Cover

    This raster dataset depicts land cover and contains four layers from the National Land Cover Database (NLCD). The names and descriptions of each layer are as follows:

    NLCD2001. The National Land Cover Database 2001 land cover layer was produced through a cooperative mapping project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. This is a single layer for 2001 land cover data at all levels; level 2 data for 2001 can be derived from this layer by collapsing level 1 features into level 2 categories. This level 1 layer contains seventeen classes:

    • 0: Unknown
    • 11: Open Water
    • 12: Perennial Snow/Ice
    • 21: Developed, Open Space
    • 22: Developed, Low Intensity
    • 23: Developed, Medium Intensity
    • 24: Developed, High Intensity
    • 31: Barren Land
    • 41: Deciduous Forest
    • 42: Evergreen Forest
    • 43: Mixed Forest
    • 52: Shrub/Scrub
    • 71: Herbaceous
    • 81: Hay/Pasture
    • 82: Cultivated Crops
    • 90: Woody Wetlands
    • 95: Emergent Herbaceous Wetlands

    NLCD2001 land cover class descriptions:

    • Open Water - All areas of open water, generally with less than 25% cover of vegetation or soil.
    • Perennial Ice/Snow - All areas characterized by a perennial cover of ice and/or snow, generally greater than 25% of total cover.
    • Developed, Open Space - Includes areas with a mixture of some constructed materials, but mostly vegetation in the form of lawn grasses. Impervious surfaces account for less than 20 percent of total cover. These areas most commonly include large-lot single-family housing units, parks, golf courses, and vegetation planted in developed settings for recreation, erosion control, or aesthetic purposes.
    • Developed, Low Intensity - Includes areas with a mixture of constructed materials and vegetation. Impervious surfaces account for 20-49 percent of total cover. These areas most commonly include single-family housing units.
    • Developed, Medium Intensity - Includes areas with a mixture of constructed materials and vegetation. Impervious surfaces account for 50-79 percent of the total cover. These areas most commonly include single-family housing units.
    • Developed, High Intensity - Includes highly developed areas where people reside or work in high numbers. Examples include apartment complexes, row houses, and commercial/industrial. Impervious surfaces account for 80 to 100 percent of the total cover.
    • Barren Land (Rock/Sand/Clay) - Barren areas of bedrock, desert pavement, scarps, talus, slides, volcanic material, glacial debris, sand dunes, strip mines, gravel pits, and other accumulations of earthen material. Generally, vegetation accounts for less than 15% of total cover.
    • Deciduous Forest - Areas dominated by trees generally greater than 5 meters tall, and greater than 20% of total vegetation cover. More than 75 percent of the tree species shed foliage simultaneously in response to seasonal change.
    • Evergreen Forest - Areas dominated by trees generally greater than 5 meters tall, and greater than 20% of total vegetation cover. More than 75 percent of the tree species maintain their leaves all year. Canopy is never without green foliage.
    • Mixed Forest - Areas dominated by trees generally greater than 5 meters tall, and greater than 20% of total vegetation cover. Neither deciduous nor evergreen species are greater than 75 percent of total tree cover.
    • Shrub/Scrub - Areas dominated by shrubs less than 5 meters tall, with shrub canopy typically greater than 20% of total vegetation. This class includes true shrubs, young trees in an early successional stage, or trees stunted from environmental conditions.
    • Herbaceous - Areas dominated by graminoid or herbaceous vegetation, generally greater than 80% of total vegetation. These areas are not subject to intensive management such as tilling but can be utilized for grazing.
    • Hay/Pasture - Areas of grasses, legumes, or grass-legume mixtures planted for livestock grazing or the production of seed or hay crops, typically on a perennial cycle. Pasture/hay vegetation accounts for greater than 20 percent of total vegetation.
    • Cultivated Crops - Areas used for the production of annual crops, such as corn, soybeans, vegetables, tobacco, and cotton, and also perennial woody crops such as orchards and vineyards. Crop vegetation accounts for greater than 20 percent of total vegetation. This class also includes all land being actively tilled.
    • Woody Wetlands - Areas where forest or shrubland vegetation accounts for greater than 20 percent of vegetative cover and the soil or substrate is periodically saturated with or covered with water.
    • Emergent Herbaceous Wetlands - Areas where perennial herbaceous vegetation accounts for greater than 80 percent of vegetative cover and the soil or substrate is periodically saturated with or covered with water.

    Landcover_NaturalConverted_NLCD2011. This layer depicts natural vs. converted land cover circa 2011 and was extracted from the map package NLCD2011_LNC.mpk. It contains two classes:

    • 1: Converted - Developed areas, cultivated crops, and hay/pasture lands.
    • 2: Natural - All other major cover types.

    Landcover_Level1_NLCD2011. This layer was extracted from the map package NLCD2011_Level1.mpk and contains nine classes:

    • 1: Open Water
    • 2: Developed
    • 3: Barren/Quarries/Transitional
    • 4: Forest
    • 5: Scrubs/Shrub
    • 6: Perennial Ice/Snow
    • 7: Grassland/Herbaceous
    • 8: Agriculture
    • 9: Wetlands

    Landcover_Level2_NLCD2011. This layer was extracted from the map package NLCD2011_Level2.mpk and contains sixteen classes:

    • 11: Open Water
    • 12: Perennial Ice/Snow
    • 21: Developed, Open Space
    • 22: Developed, Low Intensity
    • 23: Developed, Medium Intensity
    • 24: Developed, High Intensity
    • 31: Barren Land
    • 41: Deciduous Forest
    • 42: Evergreen Forest
    • 43: Mixed Forest
    • 52: Shrub/Scrub
    • 71: Herbaceous
    • 81: Hay/Pasture
    • 82: Cultivated Crops
    • 90: Woody Wetlands
    • 95: Emergent Herbaceous Wetlands

    Road Density

    Road_Density. This raster layer depicts road density (km/km2) calculated for all roads in and around the study area (a 30 km buffer around BICA) as of 2005. This layer was extracted from the map package AllRoads_rdd.mpk.

    The map packages mentioned above can be found in the DataStore reference: Bighorn Canyon National Recreation Area Landscape Context, Map Packages. National Park Service. https://irma.nps.gov/DataStore/Reference/Profile/2306146
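For working with the NLCD2001 layer programmatically, the value-to-class legend above transcribes directly into a lookup table. A sketch (the dictionary is taken from the listing; reading raster cell values with a library such as rasterio is left out):

```python
# Value-to-class lookup for the NLCD2001 level 1 layer, transcribed
# from the class table above.
NLCD2001_CLASSES = {
    0: "Unknown",
    11: "Open Water",
    12: "Perennial Snow/Ice",
    21: "Developed, Open Space",
    22: "Developed, Low Intensity",
    23: "Developed, Medium Intensity",
    24: "Developed, High Intensity",
    31: "Barren Land",
    41: "Deciduous Forest",
    42: "Evergreen Forest",
    43: "Mixed Forest",
    52: "Shrub/Scrub",
    71: "Herbaceous",
    81: "Hay/Pasture",
    82: "Cultivated Crops",
    90: "Woody Wetlands",
    95: "Emergent Herbaceous Wetlands",
}

def class_name(value: int) -> str:
    # Unlisted cell values fall back to the catalog's "Unknown" class.
    return NLCD2001_CLASSES.get(value, "Unknown")
```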

  12. Synthetic Jane Street Dataset

    • kaggle.com
    zip
    Updated Feb 4, 2021
    Cite
    Christoffer Karlsson (2021). Synthetic Jane Street Dataset [Dataset]. https://www.kaggle.com/christoffer/synthetic-jane-street-dataset
    Explore at:
    zip (2823427381 bytes)
    Dataset updated
    Feb 4, 2021
    Authors
    Christoffer Karlsson
    Description

    Context

    Synthetic data can be useful for all kinds of model-validation purposes. This dataset should act as a drop-in replacement for the original dataset, allowing you to compare your model's performance on random data with its performance on real data.

    Content

    This dataset has the same structure as the original dataset, but all features and weights have been randomized by sampling (with replacement) from the original features. The target resps have been randomized together, as if resp and the resp_i columns were a single feature.
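That randomization scheme can be sketched with NumPy and pandas. The column names here are hypothetical stand-ins for the competition data, and this is an illustration of the described approach, not the generator actually used:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical frame standing in for the original competition data.
df = pd.DataFrame({
    "feature_0": rng.normal(size=100),
    "weight": rng.uniform(size=100),
    "resp": rng.normal(size=100),
    "resp_1": rng.normal(size=100),
})

synthetic = df.copy()
# Randomize each feature/weight column independently by sampling
# with replacement from its own values.
for col in ["feature_0", "weight"]:
    synthetic[col] = rng.choice(df[col].to_numpy(), size=len(df), replace=True)

# Randomize the resp columns together: draw row indices once so resp
# and resp_1 stay aligned, as if they were a single feature.
idx = rng.integers(0, len(df), size=len(df))
synthetic[["resp", "resp_1"]] = df[["resp", "resp_1"]].to_numpy()[idx]
```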

    Acknowledgements

    This dataset is based on competition data from the Jane Street Market Prediction competition.

    Inspiration

    Check if your model performs as it should and that you don't have any leaks in your cross validation.

  13. Data for: The hippocampal representation of context is preserved despite...

    • datadryad.org
    • zenodo.org
    zip
    Updated Apr 5, 2022
    Cite
    Alexandra Keinath (2022). Data for: The hippocampal representation of context is preserved despite neural drift [Dataset]. http://doi.org/10.5061/dryad.2z34tmpp9
    Explore at:
    zip
    Dataset updated
    Apr 5, 2022
    Dataset provided by
    Dryad
    Authors
    Alexandra Keinath
    Time period covered
    Mar 24, 2022
    Description

    The hippocampus is thought to mediate episodic memory through the instantiation and reinstatement of context-specific cognitive maps. However, recent longitudinal experiments have challenged this view, reporting that most hippocampal cells change their tuning properties over days even in the same environment. Often referred to as neural or representational drift, these dynamics raise questions about the capacity and content of the hippocampal code. One such question is whether and how these long-term dynamics impact the hippocampal code for context. To address this, we imaged large CA1 populations over more than a month of daily experience as freely behaving mice participated in an extended geometric morph paradigm. We find that long-timescale changes in population activity occurred orthogonally to the representation of context in network space, allowing for consistent readout of contextual information across weeks. This population-level structure was supported by heterogeneous patterns...

  14. Data from: Context-dependent selectivity to natural scenes in the retina

    • data.niaid.nih.gov
    Updated Apr 12, 2023
    Cite
    Matías A. Goldin; Baptiste Lefebvre; Samuele Virgili (2023). Context-dependent selectivity to natural scenes in the retina [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6868361
    Explore at:
    Dataset updated
    Apr 12, 2023
    Dataset provided by
    Institut de la vision, Sorbonne Universite
    Authors
    Matías A. Goldin; Baptiste Lefebvre; Samuele Virgili
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Here is a sample dataset to replicate the study published at https://www.biorxiv.org/content/10.1101/2021.10.01.462157v1

  15. Summary of the 10 cases of the data that we consider.

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Zhengwen Shen; Huafeng Wang; Weiwen Xi; Xiaogang Deng; Jin Chen; Yu Zhang (2023). Summary of the 10 cases of the data that we consider. [Dataset]. http://doi.org/10.1371/journal.pone.0178411.t001
    Explore at:
    xls
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zhengwen Shen; Huafeng Wang; Weiwen Xi; Xiaogang Deng; Jin Chen; Yu Zhang
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of the 10 cases of the data that we consider.

  16. Area Plans Context First PTRC

    • ckan.mobidatalab.eu
    wfs, wms
    Updated Apr 29, 2023
    Cite
    GeoDatiGovIt RNDT (2023). Area Plans Context First PTRC [Dataset]. https://ckan.mobidatalab.eu/dataset/plans-of-area-context-first-ptrc
    Explore at:
    wms, wfs
    Dataset updated
    Apr 29, 2023
    Dataset provided by
    GeoDatiGovIt RNDT
    Description

    Context area plans from the first PTRC, listed in Table 8 of the PTRC approved in 1992.

  17. Data and Code for: Search Costs and Context Effects

    • openicpsr.org
    delimited
    Updated Jul 15, 2024
    Cite
    Heiko Karle; Florian Kerzenmacher; Heiner Schumacher; Frank Verboven (2024). Data and Code for: Search Costs and Context Effects [Dataset]. http://doi.org/10.3886/E207961V1
    Explore at:
    delimited
    Dataset updated
    Jul 15, 2024
    Dataset provided by
    American Economic Association
    Authors
    Heiko Karle; Florian Kerzenmacher; Heiner Schumacher; Frank Verboven
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Empirical search cost estimates are often large and increasing in the size of the transaction. We conduct an online search experiment in which we manipulate the price scale while keeping the physical search effort per price quote constant. Additionally, we obtain a direct measure of subjects’ opportunity costs of time. Using a standard search model, we confirm that search cost estimates are large and increasing in the price scale. We then modify the model to incorporate context effects with respect to prices. This results in search cost estimates that are scale-independent and correspond well to subjects’ opportunity costs of time.

  18. o

    NASA / USGS Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) Targeted...

    • registry.opendata.aws
    Updated Apr 21, 2023
    Cite
    NASA (2023). NASA / USGS Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) Targeted DTMs [Dataset]. https://registry.opendata.aws/nasa-usgs-controlled-mro-ctx-dtms/
    Explore at:
    Dataset updated
    Apr 21, 2023
    Dataset provided by
NASA (https://www.nasa.gov)
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

As of March 2023, the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) sensor has collected more than 5000 targeted stereopairs. During HiRISE acquisition, the Context Camera (CTX) also collects lower-resolution images of greater spatial extent; these CTX acquisitions are also targeted stereopairs. This data set contains targeted CTX DTMs and orthoimages created with the NASA Ames Stereo Pipeline. The data were produced from relatively controlled CTX images that had been globally bundle-adjusted with the USGS Integrated Software for Imagers and Spectrometers (ISIS) jigsaw application. Relative control at global scale reduces common issues, such as spacecraft jitter, in the resulting DTMs. The DTMs were then aligned, in 26 separate groupings, to the MOLA product using an iterative pc_align approach. All DTMs and orthoimages are therefore absolutely controlled to MOLA, a proxy for the Mars geodetic coordinate reference frame.

  19. I

    Data from: Second-generation citation context analysis (2010-2019) to...

    • databank.illinois.edu
    • aws-databank-alb.library.illinois.edu
    Updated Sep 2, 2020
    Cite
    Jodi Schneider; Di Ye; Alison Hill (2020). Second-generation citation context analysis (2010-2019) to retracted paper Matsuyama 2005 [Dataset]. http://doi.org/10.13012/B2IDB-3331845_V2
    Explore at:
    Dataset updated
    Sep 2, 2020
    Authors
    Jodi Schneider; Di Ye; Alison Hill
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Dataset funded by
    Alfred P. Sloan Foundation
    Description

Citation context annotation. This dataset is a second version (V2) and part of the supplemental data for Jodi Schneider, Di Ye, Alison Hill, and Ashley Whitehorn (2020), "Continued post-retraction citation of a fraudulent clinical trial report, eleven years after it was retracted for falsifying data", Scientometrics, in press, DOI: 10.1007/s11192-020-03631-1.

Publications were selected by examining all citations to the retracted paper Matsuyama 2005 and selecting the 35 citing papers, published 2010 to 2019, which do not mention the retraction but which mention the methods or results of the retracted paper (called "specific" in Ye, Di; Hill, Alison; Whitehorn (Fulton), Ashley; Schneider, Jodi (2020): Citation context annotation for new and newly found citations (2006-2019) to retracted paper Matsuyama 2005. University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8150563_V1). The annotated citations are second-generation citations to the retracted paper Matsuyama 2005 (RETRACTED: Matsuyama W, Mitsuyama H, Watanabe M, Oonakahara KI, Higashimoto I, Osame M, Arimura K. Effects of omega-3 polyunsaturated fatty acids on inflammatory markers in COPD. Chest. 2005 Dec 1;128(6):3817-27.), retracted in 2008 (Retraction in: Chest (2008) 134:4 (893), https://doi.org/10.1016/S0012-3692(08)60339-6).

OVERALL DATA for VERSION 2 (V2)

FILES / FILE FORMATS
Same data in two formats (both same as in V1):
- 2010-2019 SG to specific not mentioned FG.csv - Unicode CSV (preservation format only)
- 2010-2019 SG to specific not mentioned FG.xlsx - Excel workbook (preferred format)
Additional files in V2:
- 2G-possible-misinformation-analyzed.csv - Unicode CSV (preservation format only)
- 2G-possible-misinformation-analyzed.xlsx - Excel workbook (preferred format)

ABBREVIATIONS
- 2G - refers to the second generation of citations to Matsuyama
- FG - refers to the direct (first-generation) citation of Matsuyama (the one the second-generation item cites)

COLUMN HEADER EXPLANATIONS (file: 2G-possible-misinformation-analyzed)
Other column headers in this file have the same meaning as explained in V1. Additional headers:
- Quote Number - the order of the quote (citation context citing the first-generation article given in "FG in bibliography") in the second-generation article (given in "2G article")
- Quote - the text of the quote (citation context citing the first-generation article given in "FG in bibliography") in the second-generation article (given in "2G article")
- Translated Quote - English translation of "Quote", automatic translation from Google Scholar
- Seriousness/Risk - our assessment of the risk of misinformation and its seriousness
- 2G topic - our assessment of the topic of the second-generation article (given in "2G article")
- 2G section - the section of the citing article (the second-generation article given in "2G article") in which the cited article (the first-generation article given in "FG in bibliography") was found
- FG in bib type - the type of article (e.g., review article) of the cited article (the first-generation article given in "FG in bibliography")
- FG in bib topic - our assessment of the topic of the cited article (the first-generation article given in "FG in bibliography")
- FG in bib section - the section of the cited article (the first-generation article given in "FG in bibliography") in which the Matsuyama retracted paper was cited
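Because the annotation files are distributed as plain Unicode CSV, the columns described above can be tallied with standard tooling. A minimal sketch, assuming hypothetical rows (only the header names, such as "Quote Number", "Seriousness/Risk", "2G section", and "2G article", come from the description; the rows and the column subset are invented):

```python
import csv
import io
from collections import Counter

# Hypothetical three-row sample mirroring a subset of the documented
# headers; the real file is 2G-possible-misinformation-analyzed.csv.
sample = """2G article,Quote Number,Seriousness/Risk,2G section
Doe 2015,1,high,Discussion
Roe 2018,1,low,Introduction
Roe 2018,2,high,Discussion
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Tally citation contexts by the annotators' risk assessment.
risk_counts = Counter(row["Seriousness/Risk"] for row in rows)
print(dict(risk_counts))  # → {'high': 2, 'low': 1}
```

Swapping `io.StringIO(sample)` for `open("2G-possible-misinformation-analyzed.csv", encoding="utf-8")` would run the same tally on the published file.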

  20. Dataset Individual Data Figure 6 - Context-Dependent Preferences in...

    • figshare.com
    xlsx
    Updated May 30, 2023
    + more versions
    Cite
    Marco Vasconcelos; Tiago Monteiro; Alex Kacelnik (2023). Dataset Individual Data Figure 6 - Context-Dependent Preferences in Starlings [Dataset]. http://doi.org/10.6084/m9.figshare.1551383.v2
    Explore at:
xlsx (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    Marco Vasconcelos; Tiago Monteiro; Alex Kacelnik
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Individual data used in Figure 6.
