MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Dimensions is the largest database of research insight in the world. It represents the most comprehensive collection of linked data related to the global research and innovation ecosystem available in a single platform. Because Dimensions maps the entire research lifecycle, you can follow academic and industry research from early-stage funding, through to output and on to social and economic impact. Businesses, governments, universities, investors, funders and researchers around the world use Dimensions to inform their research strategy and make evidence-based decisions on the R&D and innovation landscape. With Dimensions on Google BigQuery, you can seamlessly combine Dimensions data with your own private and external datasets; integrate with Business Intelligence and data visualization tools; and analyze billions of data points in seconds to create the actionable insights your organization needs.

Examples of usage:
- Competitive intelligence
- Horizon-scanning & emerging trends
- Innovation landscape mapping
- Academic & industry partnerships and collaboration networks
- Key Opinion Leader (KOL) identification
- Recruitment & talent
- Performance & benchmarking
- Tracking funding dollar flows and citation patterns
- Literature gap analysis
- Marketing and communication strategy
- Social and economic impact of research

About the data: Dimensions is updated daily and constantly growing. It contains over 112m linked research publications, 1.3bn+ citations, 5.6m+ grants worth $1.7trillion+ in funding, 41m+ patents, 600k+ clinical trials, 100k+ organizations, 65m+ disambiguated researchers and more. The data is normalized, linked, and ready for analysis. Dimensions is available as a subscription offering. For more information, please visit www.dimensions.ai/bigquery and a member of our team will be in touch shortly. If you would like to try our data for free, please select "try sample" to see our openly available COVID-19 data.
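The "combine Dimensions data with your own private and external datasets" workflow above can be sketched locally with pandas; the table layout and column names used here (doi, times_cited, project_code) are illustrative assumptions, not the actual Dimensions BigQuery schema:

```python
import pandas as pd

# Hypothetical extract from a Dimensions publications table (schema assumed for illustration)
dimensions_pubs = pd.DataFrame({
    "doi": ["10.1000/a", "10.1000/b", "10.1000/c"],
    "times_cited": [12, 3, 45],
    "year": [2019, 2020, 2021],
})

# A private, internal dataset keyed on DOI
internal = pd.DataFrame({
    "doi": ["10.1000/a", "10.1000/c"],
    "project_code": ["P-01", "P-02"],
})

# Link internal projects to Dimensions citation counts via the shared DOI key
merged = internal.merge(dimensions_pubs, on="doi", how="left")
total_citations = int(merged["times_cited"].sum())
```

On BigQuery itself, the equivalent would be a single SQL join over the subscribed Dimensions tables and your uploaded dataset.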
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset from Dimensions.ai contains all published articles, preprints, clinical trials, grants and research datasets that are related to COVID-19. This growing collection of research information now amounts to hundreds of thousands of items, and it is the only dataset of its kind. You can find an overview of the content in this interactive Data Studio dashboard: https://reports.dimensions.ai/covid-19/ The full metadata includes the researchers and organizations involved in the research, as well as abstracts, open access status, research categories and much more. You may wish to use the Dimensions web application to explore the dataset: https://covid-19.dimensions.ai/. This dataset is for researchers, universities, pharmaceutical & biotech companies, politicians, clinicians, journalists, and anyone else who wishes to explore the impact of the current COVID-19 pandemic. It is updated daily, and free for anyone to access. Please share this information with anyone you think would benefit from it. If you have any suggestions as to how we can improve our search terms to maximise the volume of research related to COVID-19, please contact us at support@dimensions.ai. About Dimensions: Dimensions is the largest database of research insight in the world. It contains a comprehensive collection of linked data related to the global research and innovation ecosystem, all in a single platform. This includes hundreds of millions of publications, preprints, grants, patents, clinical trials, datasets, researchers and organizations. Because Dimensions maps the entire research lifecycle, you can follow academic and industry research from early-stage funding, through to output and on to social and economic impact. This COVID-19 dataset is a subset of the full database. The full Dimensions database is also available on BigQuery, via subscription. Please visit www.dimensions.ai/bigquery to gain access.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Head-related impulse response measurements with the KEMAR dummy head, performed in an anechoic chamber with a resolution of 1°. The impulse responses are provided for different distances and are accompanied by headphone compensation filters.
For details have a look at README.md.
The same measurement can be downloaded as MAT files at https://doi.org/10.5281/zenodo.4459911
This dataset is further described in the following publication (see the PDF file):
H. Wierstorf, M. Geier, A. Raake, S. Spors - A Free Database of Head-Related Impulse Response Measurements in the Horizontal Plane with Multiple Distances. In 130th AES Conv. 2011, eBrief 6.
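To illustrate how such measurements are used, a mono signal can be rendered binaurally by convolving it with a left/right HRIR pair; the 3-tap responses below are invented toys (real KEMAR HRIRs from this database are far longer):

```python
import numpy as np

def apply_hrir(signal, hrir_left, hrir_right):
    """Binaural rendering: convolve a mono signal with left- and right-ear HRIRs."""
    return np.convolve(signal, hrir_left), np.convolve(signal, hrir_right)

# Toy impulse responses and a unit impulse as the input signal
hrir_l = np.array([0.5, 0.3, 0.1])
hrir_r = np.array([0.4, 0.2, 0.1])
mono = np.array([1.0, 0.0, 0.0, 0.0])
left, right = apply_hrir(mono, hrir_l, hrir_r)
```

Convolving with a unit impulse simply returns each HRIR (zero-padded), which is a handy sanity check when first loading the measurement files.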
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This material is part of the free Environmental Performance in Construction (EPiC) Database. The EPiC Database contains embodied environmental flow coefficients for 250+ construction materials using a comprehensive hybrid life cycle inventory approach. Dimension stone is the common term used for finished blocks or slabs of stone used in construction. There are a variety of rock types used to create dimension stone, including marble, granite, slate, travertine and others. These have different properties, and can vary in strength, hardness, durability, texture, colour, size and cost. Dimension stone is mined from quarries, using precision saws, burners and blasting. Slabs are then graded, cut to size, and finished using a variety of techniques, including sandblasting, polishing, honing, and saw cutting; each technique provides a different finish and texture. Resin can be used to fill imperfections in the stone. Dimension stones are commonly used for bathroom vanities, countertops, flooring and cladding. Granite is used for external and flooring applications due to its hardness and ability to withstand weathering. Marble and travertine are commonly used for benchtops and interior applications.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Polymer-induced heteronucleation was utilized for the selective crystallization of the color polymorphic platinum complexes Pt(bpy)Cl2 and Pt(phen)Cl2. Crystal structures of two polymorphs of Pt(phen)Cl2 were determined and reveal that, as in the case of Pt(bpy)Cl2, this compound has one form with Pt···Pt interactions (orange crystals) and another lacking these contacts (yellow crystals). Free energy measurements reveal that the polymorphs of Pt(bpy)Cl2 and Pt(phen)Cl2 without Pt···Pt interactions are more stable in both cases by 0.67(2) and 0.53(1) kJ/mol, respectively, and this finding is consistent with the principle of close packing. Furthermore, a search of the Cambridge Structural Database reveals that, for polymorphic platinum complexes, shorter intermolecular Pt···Pt interactions generally result in less dense structures.
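The measured free-energy differences translate into relative polymorph populations through the Boltzmann factor exp(-ΔG/RT); a quick check at an assumed room temperature (the ΔG values are from the text above):

```python
import math

def boltzmann_ratio(delta_g_kj_per_mol, temperature_k=298.15):
    """Relative population of the less stable polymorph: exp(-dG / RT)."""
    R = 8.314462618e-3  # gas constant in kJ/(mol*K)
    return math.exp(-delta_g_kj_per_mol / (R * temperature_k))

ratio_bpy = boltzmann_ratio(0.67)   # Pt(bpy)Cl2 polymorph pair
ratio_phen = boltzmann_ratio(0.53)  # Pt(phen)Cl2 polymorph pair
```

Both ratios come out close to 1, consistent with the small energy differences and with both polymorphs being experimentally obtainable.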
By Andy R. Terrel [source]
This survey utilizes the cutting-edge three-dimensional (3-D) surface anthropometry technology, which measures the outermost surface of the human body. These technologies are a breakthrough in measuring capabilities, as they can accurately record hundreds of thousands of points in three dimensions in only a few seconds. With this data, designers and engineers are able to use computer-aided design tools and rapid prototyping in conjunction with more realistic postures to create better designs for their target audience more effectively.
Surface anthropometry has many advantages over traditional measuring methods like rulers and tape measures: it helps reduce guesswork through its accuracy; it allows measurements to be taken long after a subject has left; it provides an efficient way to capture individuals while wearing clothing, equipment or any other accessories; each measurement is comparable with those collected by other groups regardless of who took them; and lastly, the system is non-contact, so there’s no risk of discrepancies between different measurers.
Our survey looks at three-dimensional body measurements: demographics such as age, gender, reported height and reported weight, as well as individual body measurements such as waist circumference, preferred bra and cup size, ankle circumference, scye circumference, chest circumference, hip height, spine-to-elbow length, arm lengths and shoulder seams, sleeve inseam, biacromial breadth, bicristal breadth, bust circumference, cervicale height, interscye distance, acromion height, acromion-radiale length, axilla height, elbow height, knee height, radiale-stylion length, hand length and neck circumference. These measurements are taken from the dataset's CSV file; please make sure you provide us with all the necessary information. Thank you!
This dataset is provided to help researchers, designers, engineers and other professionals in related fields use 3-D surface anthropometry technology to effectively measure the outer surface of the human body.
Using this dataset can enable you to capture hundreds of thousands of points in three-dimensions on the human body surface. This data provides insights into sizing, fitting and proportions of a range of different body shapes and sizes which can be incredibly useful for many purposes like fashion design or biomedical research.
To get started with this dataset it is helpful to become familiar with some basic terminology such as biacromial breadth (the distance between the furthest points on the left and right shoulders), bicristal breadth (the breadth across the iliac crests), knee height (the vertical distance from hip joint center to kneecap), ankle circumference (measurement taken at the ankle joint), etc. Knowing these measurements can help you better interpret and utilize the data provided in this survey.
Next up, you’ll want to familiarise yourself with the various measurements given for each column in this dataset, including: age (Integer), num_children (Integer), gender (String), reported_height (Float), reported_weight (Float), and more. Once ready, dive into the data by downloading it into your chosen analysis tool; popular options include KNIME and RStudio. You’ll be able to explore correlations between size and shape metrics, as well as discover patterns between participants based on gender, age, etc. Spend some time getting comfortable with your chosen system and just keep exploring interesting connections! Finally, if you have a specific use case, remember that user-defined variables are also possible, so create variables when needed! Thanks so much for taking part in our survey; we wish you the best of luck analyzing the data and hope it's useful!
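As a concrete starting point, the reported_height and reported_weight columns described above can be combined into a BMI column with pandas; the rows below are invented, and the units (cm, kg) are an assumption about the actual file:

```python
import pandas as pd

# Toy rows mimicking the survey's demographic columns
df = pd.DataFrame({
    "gender": ["Female", "Male"],
    "reported_height": [165.0, 180.0],  # assumed centimetres
    "reported_weight": [60.0, 81.0],    # assumed kilograms
})

# BMI = weight (kg) / height (m)^2
df["bmi"] = df["reported_weight"] / (df["reported_height"] / 100.0) ** 2
```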
- Developing web-based applications or online platforms for measuring body dimensions using 3D technology for custom clothing and equipment.
- Establishing anthropometric databases, allowing users to easily find measurements of all kinds of body shapes and sizes;
- Analyzing patterns between anthropometric measurements and clinical data such as BMI (body mass index) to benefit the understanding of human health status and nutrition needs
If you use this dataset in your research, please credit the original authors. Data Source
**License: [Dataset copyright by authors](http...
ChemIDplus is a free, web-based search system that provides access to structure and nomenclature authority files used for the identification of chemical substances cited in National Library of Medicine (NLM) databases. ChemIDplus also provides structure searching and direct links to many biomedical resources at NLM and on the Internet for chemicals of interest. The database contains over 350,000 chemical records, of which over 80,000 include chemical structures, and is searchable by Name, Synonym, CAS Registry Number, Molecular Formula, Classification Code, Locator Code, and Structure.
The Utility Rate Database (URDB) is a free storehouse of rate structure information from utilities in the United States. Here, you can search for your utilities and rates to find out exactly how you are charged for your electric energy usage. Understanding this information can help reduce your bill, for example, by running your appliances during off-peak hours (times during the day when electricity prices are less expensive) and help you make more informed decisions regarding your energy usage.
Rates are also extremely important to the energy analysis community for accurately determining the value and economics of distributed generation such as solar and wind power. In the past, collecting rates has been an effort duplicated across many institutions, and rate collection can be tedious and slow; with the introduction of the URDB, however, OpenEI aims to change how analysis of rates is performed. The URDB allows anyone to access these rates in a computer-readable format for use in their tools and models. OpenEI provides an API for software to automatically download the appropriate rates, thereby allowing detailed economic analysis to be done without ever having to directly handle complex rate structures. Essentially, rate collection and processing that used to take weeks or months can now be done in seconds!
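As a toy illustration of why machine-readable rates matter, costing a day's usage under a two-tier time-of-use rate takes only a few lines once the rate structure is data; the rates and peak window below are invented, not URDB values:

```python
def tou_cost(usage_kwh_by_hour, peak_hours, peak_rate, offpeak_rate):
    """Cost of one day's hourly usage under a simple two-tier time-of-use rate."""
    cost = 0.0
    for hour, kwh in enumerate(usage_kwh_by_hour):
        rate = peak_rate if hour in peak_hours else offpeak_rate
        cost += kwh * rate
    return cost

# 1 kWh every hour; peak 16:00-20:00 at $0.30/kWh, off-peak at $0.10/kWh
daily = tou_cost([1.0] * 24, peak_hours=set(range(16, 20)), peak_rate=0.30, offpeak_rate=0.10)
```

Shifting that flat load out of the four peak hours would save $0.80/day under this toy rate, which is exactly the kind of calculation the URDB API lets tools automate.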
NREL’s System Advisor Model (formerly Solar Advisor Model or SAM), currently has the ability to communicate with the OpenEI URDB over the internet. SAM can download any rate from the URDB directly into the program, thereby enabling users to conduct detailed studies on various power systems ranging in size from a small residential rooftop solar system to large utility scale installations. Other applications available at NREL, such as OpenPV and IMBY, will also utilize the URDB data.
Upcoming features include better support for entering net metering parameters, maps to summarize the data, geolocation capabilities, and hundreds of additional rates!
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Longitudinal behavior of Altmetrics in Orthodontic research: Analysis of the orthodontic journals indexed in the Journal Citation Reports from 2014 to 2018. A first search was carried out, in December 2019, in the InCites JCR database to select orthodontic journals that were included in the category of dentistry, oral surgery, and medicine of the JCR during the period from 2014 to 2018. The online interest generated by the orthodontic research outputs was observed and tracked through the free Dimensions app (https://app.dimensions.ai/discover/publication) in the Dimensions database. The search was limited to the nine journals listed in the JCR in 2018, which were the American Journal of Orthodontics & Dentofacial Orthopedics (AJODO), The Angle Orthodontist, The European Journal of Orthodontics (EJO), Progress in Orthodontics, Korean Journal of Orthodontics (KJO), Orthodontics & Craniofacial Research (OCR), Journal of Orofacial Orthopedics/Fortschritte der Kieferorthopädie, Seminars in Orthodontics, and the Australian Orthodontic Journal. The Dimensions app was used to carry out the search and the following filters were applied: publication year (2018 or 2017 or 2016 or 2015 or 2014); source title (American Journal of Orthodontics & Dentofacial Orthopedics OR The European Journal of Orthodontics OR The Angle Orthodontist OR Korean Journal of Orthodontics OR Orthodontics & Craniofacial Research OR Journal of Orofacial Orthopedics/Fortschritte der Kieferorthopädie OR Progress in Orthodontics OR Seminars in Orthodontics OR the Australian Orthodontic Journal). Data were exported to an Excel data sheet (Microsoft Office for Mac version 16.43). In December 2021, a second search was performed on the Dimensions web app by the members of the research team, introducing the DOI or the article title of the 3678 items included in the 2019 sample.
Presented here are the data for the 3678 analysed items, divided per journal; the number of altmetric mentions is given for each item at both time points, as well as its change over the studied period.
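A sketch of how the exported per-item counts can be summarised per journal with pandas; the journal names follow the list above, but the mention counts are invented:

```python
import pandas as pd

# One row per item, with mention counts at the two searches (Dec 2019 and Dec 2021)
items = pd.DataFrame({
    "journal": ["AJODO", "AJODO", "EJO"],
    "mentions_2019": [5, 0, 12],
    "mentions_2021": [9, 2, 15],
})
items["change"] = items["mentions_2021"] - items["mentions_2019"]
change_per_journal = items.groupby("journal")["change"].sum()
```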
Approximately 14.2 million measurements of surface water pCO2 made over the global oceans during 1957-2019 have been processed to make a uniform data file in this Version 2019. Measurements made in open oceans as well as in coastal waters are included. The data assembled include only those measured using equilibrator-CO2 analyzer systems, and have been quality-controlled based upon the stability of the system performance, the reliability of calibrations for CO2 analysis and the internal consistency of the data. We have added 567,632 data points comprising 158 leg/cruise segments in this version. All of these were collected on the 4 ships in our current field program. These 4 ships operate primarily in high latitudes in both hemispheres and have built decades-long records in these areas. R/V Nathaniel B. Palmer’s system has been operating since 1995, R/V Laurence M. Gould’s since 2001, USCGC Healy’s since 2011, R/V M. Langseth’s since 2010 (terminated in 2018), and R/V Sikuliaq’s since 2015. Our contribution to this database, 3.31 million records accumulated over many years, is primarily for the polar and sub-polar seas. These underway data have been quality controlled and corrected for the time lag and temperature differences between the water intake and the pCO2 measurements. In order to allow re-examination of the data in the future, a number of measured parameters relevant to pCO2 in seawater are listed. The overall uncertainty for the pCO2 values listed is estimated to be ±2.5 µatm on average. The names and institutional affiliations of the contributors are listed in Table 1. The documentation for the previous versions (V1.0, V2007, V2008, V2009, V2010, V2011, V2012, V2013, V2014, V2015, V2016, V2017, and V2018) of our database is available at NCEI via the Ocean Carbon Data System (OCADS) LDEO Database web page.
The global pCO2 dataset is available free of charge as a numeric data package (NDP) from the OCADS: https://www.ncei.noaa.gov/access/ocean-carbon-data-system/oceans/LDEO_Underway_Database/. The NDP consists of the oceanographic data files and this printed documentation, which describes the procedures and methods used to obtain the data.
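One of the corrections described above, adjusting equilibrator pCO2 to in-situ water temperature, is commonly done with the isochemical coefficient of about 4.23 %/°C (Takahashi et al., 1993); a minimal sketch with invented example values:

```python
import math

def pco2_at_insitu_temperature(pco2_eq, t_equilibrator_c, t_insitu_c):
    """Correct equilibrator pCO2 (uatm) to in-situ water temperature using
    the commonly used isochemical factor d(ln pCO2)/dT = 0.0423 per deg C."""
    return pco2_eq * math.exp(0.0423 * (t_insitu_c - t_equilibrator_c))

# Equilibrator running 0.5 deg C warmer than the intake water
corrected = pco2_at_insitu_temperature(380.0, t_equilibrator_c=20.5, t_insitu_c=20.0)
```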
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset focuses on chamber-based methane (CH4) flux measurements in tidal wetlands across the Contiguous United States (CONUS) and is intended to serve as a community resource for Earth and environmental science research, climate change synthesis studies, and model evaluation. The database contains 35 contributed datasets with a total of 10,445 chamber-based CH4 flux observations across 41 years and 120 sites distributed across CONUS Atlantic and Pacific coasts and the Gulf of Mexico. Contributed datasets are converted to a standard format and units and organized hierarchically (site, chamber, chamber time series, porewater chemistry, and plant species) with metadata on contributors, geographic location, measurement conditions, and ancillary environmental variables. While focused on CH4 flux measurements, the database accommodates other greenhouse gas flux data (CO2 and N2O) as well as porewater profiles of various analytes, experimental treatments (e.g., fertilization, elevated CO2), and ecosystem disturbance classes (e.g., salinization, tidal restrictions, restoration). This database results from the Coastal Carbon Network’s (CCN) tidal wetland CH4 flux data synthesis. A description and analysis of the dataset are available in Arias-Ortiz et al. 2024, co-authored by members of the CCN Data Methane Working Group and data contributors.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The CVS Database provides a catalogue of original vehicle dimensions, for use in vehicle safety research and collision investigation. The purpose of this database is to provide users with a comprehensive listing of vehicle dimensions commonly used in the field of collision investigation and reconstruction, for the North American fleet of passenger cars, light trucks, vans and SUVs. The database includes model years dating back to 2011 and comprises both commonly available dimensions such as overall length, wheelbase and track widths, and also several dimensions which are not typically available from the manufacturers, nor from automotive publications. Note: to obtain the database of model years dating back to 1971, please contact Transport Canada.
In the 1960s, thermonuclear bomb tests released significant pulses of radioactive 14C into the atmosphere. This major perturbation allowed scientists to study the dynamics of the global carbon cycle by measuring and observing rates of isotopic exchange. The Radiological Dating Laboratory at the Norwegian Institute of Technology performed 14C measurements in atmospheric CO2 from 1962 to 1993 at a network of ground stations in the Northern and Southern hemispheres. These measurements were supplemented during 1965 with high-altitude (9-12.6 km) air samples collected using aircraft from the Norwegian Air Force. The resulting database, coupled with other 14C data sets, provides a greater understanding of the dynamic carbon reservoir and a crude picture of anomalous sources and sinks at different geographical latitudes. This database is outstanding for its inclusion of early 14C measurements, broad spatial coverage of sampling, consistency of sampling method, and 14C calculation results corrected for isotopic fractionation and radioactive decay. This database replaces previous versions published by the authors and the Radiological Dating Laboratory. Fourteen stations spanning latitudes from Spitsbergen (78° N) to Madagascar (21° S) were used for sampling during the lifetime of the Norwegian program. Some of the stations have data for only a brief period, while others have measurements through 1993. Sampling stations subject to local industrial CO2 contamination were avoided. The sites have sufficient separation to describe the latitudinal distribution of 14C in atmospheric models. The sampling procedure for all the surface (10-2400 m asl) 14C measurements in this database consisted of quantitative absorption of atmospheric CO2 in carbonate-free 0.5 N NaOH solution. The 14C measurements were made in a CO2 proportional counter and calculated (Δ14C) as per mil excess above the normal 14C level defined by the US National Institute of Standards and Technology (NIST).
Atmospheric 14C content is finally expressed as Δ14C, which is the relative deviation of the measured 14C activity from the NIST oxalic acid standard activity, after correction for isotopic fractionation and radioactive decay related to age. The data are organized by sampling station, and each record of the database contains the sampling dates; values for 14C excess (Δ14C) relative to the NIST standard, fractionation (δ13C) relative to the Pee Dee Belemnite (PDB) standard, and corrected Δ14C excess; and the standard deviation for Δ14C. The Δ14C calculation results presented here are thus corrected for isotopic fractionation and radioactive decay, and constitute the final product of a research effort that has spanned three decades. The Δ14C station data show a sharp increase in tropospheric radiocarbon levels in the early 1960s and then a decline after the majority of nuclear tests came to an end on August 5, 1963 (Test Ban Treaty). The sharp peaks in tropospheric radiocarbon in the early 1960s are more pronounced in the Northern Hemisphere, reflecting the location of most atomic weapons tests. The measurements show large seasonal variations in the Δ14C level during the early 1960s, mainly as a result of springtime transport of bomb 14C from the stratosphere. During the 1970s, the seasonal variations are smaller and due partly to seasonal variations in CO2 from fossil-fuel emissions. The rate of decrease of atmospheric radiocarbon provides a check on the exchange constants of the atmosphere and ocean. This report and all data it describes are available from the Carbon Dioxide Information Analysis Center (CDIAC) without charge. The Nydal and Lövseth atmospheric 14C database comprises 21 data files totaling 0.2 megabytes in size. The following report describes the sampling methods and analysis.
In addition, the report includes a complete discussion of CDIAC's data-processing efforts, the contents and format of the data files, and a reprint of a Nydal and Lövseth journal article. For access to the data files, click this link to the CDIAC data transition website: http://cdiac.ess-dive.lbl.gov/ndps/ndp057.html
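The fractionation correction described above follows the standard Δ14C convention of Stuiver and Polach (1977); the key step can be sketched as follows, with invented example values:

```python
def delta14c_corrected(d14c_per_mil, d13c_per_mil):
    """Fractionation-corrected Delta14C (per mil), after Stuiver & Polach (1977):
    D14C = d14C - 2*(d13C + 25)*(1 + d14C/1000)."""
    return d14c_per_mil - 2.0 * (d13c_per_mil + 25.0) * (1.0 + d14c_per_mil / 1000.0)

# Bomb-peak-era magnitude with a typical atmospheric CO2 d13C of -8 per mil
corrected = delta14c_corrected(800.0, -8.0)
```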
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This material is part of the free Environmental Performance in Construction (EPiC) Database. The EPiC Database contains embodied environmental flow coefficients for 250+ construction materials using a comprehensive hybrid life cycle inventory approach. Cross laminated timber (CLT) is a manufactured timber product, similar to plywood. Solid timber members are bonded together, with the grain alternating by 90 degrees for each lamination. CLT is much thicker than traditional plywood and has superior structural capabilities. It has excellent dimensional stability, strength and rigidity. CLT is fabricated using a range of different timber varieties. It is typically bonded together using melamine urea formaldehyde, polyurethane or other adhesives. CLT has different structural capabilities when compared with conventional timber, and acts as a sheet product, rather than a framing product. It can be used as a complete floor, wall or roof system, without the need for additional supporting members.
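Applying an EPiC coefficient is a single multiplication of material quantity by the embodied flow coefficient; a minimal sketch, with an invented coefficient value (look up the real CLT coefficients in the database itself):

```python
def embodied_flow(quantity, coefficient):
    """Total embodied flow = material quantity x per-unit hybrid coefficient."""
    return quantity * coefficient

# Hypothetical embodied-energy coefficient for CLT, in MJ per cubic metre
CLT_ENERGY_MJ_PER_M3 = 10_000.0
total_mj = embodied_flow(12.5, CLT_ENERGY_MJ_PER_M3)  # 12.5 m3 of CLT floor panels
```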
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Hamme et al. (2019) Global noble gas and N2/Ar database, version 1.0. These data are a compilation of dissolved noble gas and N2/Ar ratio measurements collected from 1998-2016 in locations spanning the globe.
This database contains the data on dissolved gas measurements published in:
Hamme, R. C., Nicholson, D. P., Jenkins, W. J., & Emerson, S. R. (2019). Using Noble Gases to Assess the Ocean’s Carbon Pumps. Annual Review of Marine Science, 11(1), 75–103. doi:10.1146/annurev-marine-121916-063604
Data Originators: Roberta Hamme, William Jenkins, Steven Emerson, David Nicholson, Rachel Stanley
Date contributed to BCO-DMO: 17 January 2022
Version 2.0 corrects an incorrect sign in the longitude for cruise 33KI20040814:HOT162 in version 1.0. The error in the database does not affect any figures in the publication.
This data is provided free for educational and non-profit research purposes. We ask that you appropriately cite Hamme et al. (2019) Annual Review of Marine Science in any work that uses this database. Please also send an e-mail to rhamme@uvic.ca, letting her know that you have downloaded the data, so that she can keep you apprised of any further corrections or changes. If you discover what you believe to be an error in the database, it is your responsibility to send an e-mail to rhamme@uvic.ca before using the data in a publication.
Both MATLAB .mat databases and comma-delimited .csv text files were provided to BCO-DMO. For the flat, ASCII version (csv) use the "Get Data" button on the BCO-DMO metadata landing page. For convenience, the MATLAB file is also provided as a Supplemental File: Global_Hammeetal2019.mat (400 kb)
These two formats contain identical information. Different cruises can be identified by the sequence number, cruisename, or date.
Secondary data - On some cruises, Ar concentration and N2/Ar ratio measurements were performed at two different labs on separate samples, for inter-calibration purposes. In these cases, data from both labs is given separately with data from the second lab labeled "secondary".
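Dissolved-gas results like these are often interpreted as saturation anomalies, the percent deviation of a measured concentration from solubility equilibrium; a minimal sketch with invented values (not taken from this database):

```python
def saturation_anomaly_percent(measured, equilibrium):
    """Saturation anomaly in percent: (C_meas / C_eq - 1) * 100."""
    return (measured / equilibrium - 1.0) * 100.0

# Slight Ar supersaturation relative to solubility equilibrium (units cancel)
d_ar = saturation_anomaly_percent(measured=13.39, equilibrium=13.26)
```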
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The peer-reviewed paper associated with this dataset has now been published in Scientific Data, and can be accessed here: https://www.nature.com/articles/sdata201820. Please cite this when using the dataset.
Open clinical trial data provide a valuable opportunity for researchers worldwide to assess new hypotheses, validate published results, and collaborate for scientific advances in medical research. Here, we present a health dataset for the non-invasive detection of cardiovascular disease (CVD), containing 657 data records from 219 subjects. The dataset covers an age range of 20–89 years and records of diseases including hypertension and diabetes. Data acquisition was carried out under the control of standard experimental conditions and specifications. This dataset can be used to carry out the study of photoplethysmograph (PPG) signal quality evaluation and to explore the intrinsic relationship between the PPG waveform and cardiovascular disease to discover and evaluate latent characteristic information contained in PPG signals. These data can also be used to study early and noninvasive screening of common CVD such as hypertension, and of related conditions such as diabetes.
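As a taste of what can be done with such records, here is a deliberately crude heart-rate estimate by peak counting on a synthetic waveform; real PPG analysis needs filtering and artifact rejection, and nothing below comes from the dataset itself:

```python
import numpy as np

def estimate_heart_rate(ppg, fs):
    """Crude beats-per-minute estimate: count local maxima of the waveform."""
    peaks = np.sum((ppg[1:-1] > ppg[:-2]) & (ppg[1:-1] >= ppg[2:]))
    duration_min = len(ppg) / fs / 60.0
    return peaks / duration_min

# Synthetic 10 s "PPG" at 100 Hz with a 1.2 Hz fundamental (72 beats/min)
fs = 100
t = np.arange(1000) / fs
ppg = np.sin(2 * np.pi * 1.2 * t)
hr = estimate_heart_rate(ppg, fs)
```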
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains word2vec embeddings in .txt format with 50 dimensions, trained on the publicly available Wikipedia database dump. Several data preprocessing steps were done to remove Wikipedia formatting like tables, citations, references, etc. Trained using 150M tokens (limited by the author's available RAM 😅). Available in 50d, and may be expanded to other dimensions. GloVe might be added at a future date.
One way to use it is the code below:

```py
import numpy as np

# Build a word -> vector dictionary from the embedding file
embeddings_dictionary = {}
with open('/kaggle/input/word2vec.txt') as fp:
    for line in fp:
        records = line.split()
        word = records[0]
        vector_dimensions = np.asarray(records[1:], dtype='float32')
        embeddings_dictionary[word] = vector_dimensions

vocab_length = 100  # modify
embedding_dim = 50

# Fill one embedding-matrix row per word in your tokenizer's vocabulary
embedding_matrix = np.zeros((vocab_length, embedding_dim))
for word, index in word_tokenizer.word_index.items():  # change depending on your data
    embedding_vector = embeddings_dictionary.get(word)
    if embedding_vector is not None:
        embedding_matrix[index] = embedding_vector
```
This dataset might be suitable for certain NLP tasks in Indonesian, by including the embeddings in a deep learning embedding layer.
If you find something that's totally out of place, feel free to comment on this dataset. Source: Wikipedia Database Dump
Land cover has been interpreted from satellite images and field-checked; other information has been digitized from topographic maps.
Member information:
Attached Vector(s):
MemberID: 1
Vector Name: Land use
Source Map Name: SPOT Pan
Source Map Scale: 50000
Source Map Date: 1989/90
Projection: Polyconic on Modified Everest Ellipsoid
Feature_type: polygon
Vector
Land use maps, interpreted from SPOT panchromatic imagery and field
checked (18 classes)
Member information:
Attached Vector(s):
MemberID: 2
Vector Name: Administrative boundaries
Source Map Name: topo sheets
Source Map Scale: 50000
Source Map Date: ?
Feature_type: polygon
Vector
Dzongkhags (Districts) and Gewogs
Member information:
Attached Vector(s):
MemberID: 3
Vector Name: Roads
Source Map Name: topo sheets
Source Map Scale: 50000
Source Map Date: ?
Feature_type: lines
Vector
Road network
Attached Report(s)
Member ID: 4
Report Name: Atlas of Bhutan
Report Authors: Land use planning section
Report Publisher: Ministry of Agriculture, Thimpu
Report Date: 1997-06-01
Report
Land cover (1:250000) and area statistics of 20 Dzongkhags
https://borealisdata.ca/api/datasets/:persistentId/versions/1.2/customlicense?persistentId=doi:10.5683/SP2/E7Z09B
Assembled from 196 references, this database records a total of 3,861 cases of historical dam failures around the world and represents the largest compilation of dam failures recorded to date (17-02-2020). Failures are recorded regardless of the kind of dam (e.g. man-made dam, tailings dam, temporary dam, natural dam), the type of structure (e.g. concrete dam, embankment dam), the type of failure (e.g. piping failure, overtopping failure) or the properties of the dam (e.g. dam height, reservoir capacity). Through this process, a set of 45 variables (which compose the dataset) has been used, when possible, available and relevant, to record information about each failure (e.g. dam description, dam properties, breach dimensions). Coupled with Excel's functionality (e.g. in Excel 2016: customizable screen visualization, search for individual cases, data filters, pivot tables), the database file can easily be adapted to the needs of the user (research field, dam type, dam failure type, etc.) and opens doors to various fields of research, such as hydrology, hydraulics and dam safety. The dataset also allows any user to optimize the verification process, to identify duplicates and to put the recorded historical dam failures back in context. Overall, this investigation work aims to standardize the collection of historical dam-failure data and to facilitate international collection by setting guidelines. The sharing method provided through this link not only represents a considerable asset for a wide audience (e.g. researchers, dam owners) but also paves the way for the field of dam safety in the current era of "Big Data".
Updated versions will be deposited (at this DOI) at undetermined frequencies in order to update the data recorded over the years.
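Beyond Excel, the XLSX file can be loaded into pandas for the same filter-and-pivot workflow. A sketch under assumed column names (`Dam type`, `Failure type`, `Dam height (m)` are hypothetical; check the actual 45-variable headers in the file), using synthetic stand-in rows:

```python
import pandas as pd

# Synthetic stand-in rows; in practice: df = pd.read_excel('dam_failures.xlsx')
# Column names below are hypothetical -- consult the real headers.
df = pd.DataFrame({
    'Dam type':       ['Embankment', 'Concrete', 'Tailings', 'Embankment'],
    'Failure type':   ['Overtopping', 'Piping', 'Overtopping', 'Piping'],
    'Dam height (m)': [15.0, 40.0, 22.0, 8.0],
})

# Filter to one research field of interest, e.g. embankment dams only
embankment = df[df['Dam type'] == 'Embankment']

# Pivot-table style summary: failure counts per dam type
counts = df.pivot_table(index='Dam type', columns='Failure type',
                        aggfunc='size', fill_value=0)
print(counts)
```

This mirrors the Excel pivot-table usage the description mentions, but in a scriptable, reproducible form.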
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These data supplement the article Schomaker, J., Walper, D., Wittmann, B.C., & Einhäuser, W. (2017). Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience. Vision Research, 133, 161-175.
Use is free for academic purposes, provided the aforementioned article is appropriately cited.
The directory contains the following files
stimuli.tar.gz - stimuli used in this study; note that this is based on the MONS database, but some deviations from the final version of the database do exist.
ratings.mat contains the variables
arousal - mean arousal rating
valence - mean valence rating
valence2 - squared mean valence rating (after subtracting midpoint)
motivationalValue - mean motivation rating
motivationalValue2 - squared mean motivation rating (after subtracting midpoint)
All variables are 104x3, where the first dimension is the stimulus number, and the second dimension the motivation ground truth (aversive, neutral, appetitive)
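The squared variables can be recomputed from the means as described above. A numpy sketch with a synthetic 104x3 stand-in array (the rating-scale midpoint of 5.0 is an assumption; substitute the midpoint of the scale actually used):

```python
import numpy as np

MIDPOINT = 5.0  # assumed scale midpoint -- substitute the actual scale's value

# Synthetic 104x3 stand-in for the `valence` variable in ratings.mat:
# stimulus number x motivation ground truth (aversive, neutral, appetitive)
rng = np.random.default_rng(1)
valence = rng.uniform(1, 9, size=(104, 3))

# valence2 = squared mean valence rating after subtracting the midpoint
valence2 = (valence - MIDPOINT) ** 2

print(valence2.shape)
```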
Experiment 1
fixationsExperiment1.mat contains the variables fixationX, fixationY, fixationDuration, fixationOnset and fixationInitial, which contain for each fixation the horizontal and vertical coordinates, the duration, the time of onset relative to trial onset, and whether it is the initial fixation. All variables have dimensions 16x104x3x50, where the first dimension is the observer, the second the scene, the third the condition and the fourth a counter of fixations. Whenever there are fewer than 50 fixations, the remainder are filled with NaN.
boundingBoxesExperiment1.mat contains for each critical object the bounding box coordinates x,y of upper left corner and width and height as variables boundingBoxX, boundingBoxY, boundingBoxW, boundingBoxH respectively. Note that this is relative to the eyetracker coordinates of experiment 1 (full display 1024x768, presentation in the center) and will therefore not match the coordinates of the images in the archive or the bounding box coordinates of experiment 2. Dimensions are 104x3, the dimensions representing scene number and condition, respectively.
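A typical object-based analysis combines the two files: count, per trial, how many fixations land inside the critical object's bounding box, with the NaN padding excluded automatically (comparisons against NaN are False). A hedged numpy sketch using synthetic data in the documented layout:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins with the documented shapes: 16 observers x 104 scenes
# x 3 conditions x up to 50 fixations, NaN-padded beyond the last fixation.
fixationX = rng.uniform(0, 1024, size=(16, 104, 3, 50))
fixationY = rng.uniform(0, 768, size=(16, 104, 3, 50))
fixationX[..., 30:] = np.nan  # pretend every trial has at most 30 fixations
fixationY[..., 30:] = np.nan

# 104x3 bounding boxes (upper-left x, y, width, height), same coordinate frame
boundingBoxX = rng.uniform(0, 900, size=(104, 3))
boundingBoxY = rng.uniform(0, 650, size=(104, 3))
boundingBoxW = rng.uniform(50, 120, size=(104, 3))
boundingBoxH = rng.uniform(50, 120, size=(104, 3))

# Broadcast the boxes over observers and fixation count, test containment
inX = (fixationX >= boundingBoxX[None, :, :, None]) & \
      (fixationX <= (boundingBoxX + boundingBoxW)[None, :, :, None])
inY = (fixationY >= boundingBoxY[None, :, :, None]) & \
      (fixationY <= (boundingBoxY + boundingBoxH)[None, :, :, None])
onObject = inX & inY  # NaN comparisons are False, so padding is excluded

counts = onObject.sum(axis=-1)  # on-object fixations per observer/scene/condition
print(counts.shape)
```

Note that for the real data the bounding boxes and fixations must be in the same coordinate frame (the experiment 1 eyetracker coordinates, per the description above).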
figure2.m computes figure 2 of the article from these data.
dataForExperiment1.Rdata contains the data frame data, which holds for each fixation the values of the predictors used in the model of table 1. It is computed from the MATLAB data listed above, together with the peak value of the AWS salience inside the object.
table1.R computes and prints the models for table 1
Experiment 2
fixationsExperiment2.mat contains fixation data for experiment 2. Variable names are as in experiment 1. Dimensions are 18x99x3x3x50, where the first dimension is the observer, the second the image number, the third the visual condition, the fourth the motivational condition and the fifth the fixation count. Since only one visual condition was shown to each observer per motivational condition, there is an additional variable 'hasData', which is 1 if the image was presented to the observer in this condition and 0 otherwise. Since fixations can fall outside the image and are therefore excluded, there is also an additional variable fixationNumber to keep a correct count of the fixation number within the trial.
boundingBoxesExperiment2.mat contains bounding box data for experiment 2 in image (and fixation) coordinates. Notation is as for experiment 1, but the coordinates refer to the image and eyetracking coordinates used for experiment 2 and may therefore differ occasionally.
figure3and4.m generates figures 3 and 4 of the article from these data files.
dataForExperiment2.Rdata contains the data frame data, which holds for each fixation the values of the predictors used in the models of tables 2 and 3. It is computed from the MATLAB data listed above, together with the peak value of the AWS salience inside the object. The fields imgMot and imgVis contain the motivational ground truth and the salience manipulation, respectively.
table2.R uses the Rdata file to compute the models for table 2 of the article and print summary results
table3.R uses the Rdata file to compute the models for table 3 of the article and print summary results. Note that the computation can take substantial time; results might deviate slightly depending on the exact version of R and its libraries used.