CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This work aimed to transform raw data into high-quality, well-organized data for research studies addressing genetics and neurodevelopmental disorders. Information on, and relations between, patients, CNVs, genes, GO terms, and diagnoses were passed through a demanding quality-check analysis before being inserted into the relational database, in order to eliminate redundancies and enhance uniformity wherever possible. By using this data, researchers can start their work one step further along, querying and identifying data suitable for analysis rather than spending time on data cleaning and pre-processing.
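As an illustration of the kind of query this organization enables, here is a minimal Python sketch joining hypothetical patient, CNV, and gene tables with the standard sqlite3 module; the file, table, and column names are assumptions for illustration, not the database's actual schema.

import sqlite3

# Minimal sketch, assuming a SQLite export with hypothetical tables
# patient(id, diagnosis), cnv(id, patient_id, region), and
# cnv_gene(cnv_id, gene_symbol); the real schema may differ.
conn = sqlite3.connect("neurodev.db")  # hypothetical file name
query = """
SELECT p.id, p.diagnosis, c.region, g.gene_symbol
FROM patient AS p
JOIN cnv AS c ON c.patient_id = p.id
JOIN cnv_gene AS g ON g.cnv_id = c.id
WHERE p.diagnosis = ?
"""
for row in conn.execute(query, ("autism spectrum disorder",)):
    print(row)
conn.close()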
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The purpose of this systematic review was to explore the relationship of non-cognitive factors to academic and clinical performance in rehabilitation science programs. A search of 7 databases was conducted using the following eligibility criteria: graduate programs in physical therapy (PT), occupational therapy, speech-language pathology, United States-based programs, measurement of at least 1 non-cognitive factor, measurement of academic and/or clinical performance, and quantitative reporting of results. Articles were screened by title, abstract, and full text, and data were extracted.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This is the data for the paper "Using distant supervision to augment manually annotated data for relation extraction". Significant progress has been made in applying deep learning to natural language processing tasks recently. However, deep learning models typically require large amounts of annotated training data, while often only small labeled datasets are available for many natural language processing tasks in the biomedical literature. Building large datasets for deep learning is expensive, since it involves considerable human effort and usually requires domain expertise in specialized fields. In this work, we consider augmenting manually annotated data with large amounts of data obtained using distant supervision. However, data obtained by distant supervision are often noisy, so we first apply heuristics to remove some of the incorrect annotations. Then, using methods inspired by transfer learning, we show that the resulting models outperform models trained on the original manually annotated sets.
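The paper's exact heuristics are not reproduced here; the following minimal Python sketch only illustrates the general pattern under stated assumptions: distantly supervised examples are filtered by simple rules (both entities must appear in the sentence, overly long sentences are dropped) before being used alongside the gold-standard set. All field names and thresholds are hypothetical.

# Minimal sketch of filtering noisy distant-supervision data before
# combining it with a manually annotated set. The heuristics below
# are illustrative assumptions, not the paper's actual rules.
def keep(example):
    sent = example["sentence"]
    return (
        example["head"] in sent           # both entities must appear
        and example["tail"] in sent
        and len(sent.split()) <= 100      # drop overly long sentences
    )

def augment(gold, distant):
    filtered = [ex for ex in distant if keep(ex)]
    # Transfer-learning-style recipe: pre-train on the large filtered
    # set, then fine-tune on the gold annotations.
    return filtered, gold

distant = [{"sentence": "BRCA1 interacts with TP53.", "head": "BRCA1", "tail": "TP53"}]
gold = [{"sentence": "MDM2 binds TP53.", "head": "MDM2", "tail": "TP53"}]
pretrain_set, finetune_set = augment(gold, distant)
print(len(pretrain_set), len(finetune_set))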
This database is part of the article “From qualitative data to correlation using deep generative networks: Demonstrating the relation of nuclear position with the arrangement of actin filaments,” published in PLoS One in 2022 (DOI: 10.1371/journal.pone.0271056). The database has been shared as Creative Commons for research purposes only. Under this license, all database uses, including full or partial uses, modifications, or adaptations, must clearly and adequately cite the article above in full. No commercial use of the database or any images contained in it is allowed. For more information, contact Prof. Javier G. Fernandez at javier.fernandez@sutd.edu.sg
The VBRC provides bioinformatics resources to support scientific research directed at viruses belonging to the Arenaviridae, Bunyaviridae, Filoviridae, Flaviviridae, Paramyxoviridae, Poxviridae, and Togaviridae families. The Center consists of a relational database and web application that support the data storage, annotation, analysis, and information exchange goals of this work. Each data release contains the complete genomic sequences for all viral pathogens and related strains that are available for species in the above-named families. In addition to sequence data, the VBRC provides a curation for each virus species, resulting in a searchable, comprehensive mini-review of gene function relating genotype to biological phenotype, with special emphasis on pathogenesis.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The archive includes copies, compilation code, documentation and temporary data files for the BioDeepTime database.
Deposited files (a minimal loading sketch follows below):
Relational database in SQLite format: biodeeptime_sqlite.zip
Denormalized database in zipped .csv format: biodeeptime_csv.zip
Denormalized database in zipped .parquet (v1.0) format: biodeeptime_parquet.zip
Denormalized database in .rds (R version 4.0) format: biodeeptime.rds
Description of tables and columns: biodeeptime.md
Database schema: schema.pdf
Synonymy of sources: Synonymy of sources.xlsx
Change log and known issues: NEWS.md
Compilation files: bdt_compilation.zip
References in .csv format: references.csv
References in .rds format: references.rds
Reference BibTeX entries: references.bib
Bchron ages calculated for Neotoma: neotoma_bchron.rds
This repository accompanies the study BioDeepTime: a database of biodiversity time series for modern and fossil assemblages by Smith et al. (In Press).
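As a convenience, here is a minimal Python sketch for reading two of the deposited formats after unzipping. The archive names come from the list above, but the names of the files inside the archives, and any table or column names, are assumptions; biodeeptime.md is the authoritative description.

import sqlite3
import pandas as pd

# Minimal sketch, assuming the archives were unzipped alongside this
# script and that the SQLite file is named "biodeeptime.sqlite"; the
# actual file names inside the zips may differ.
conn = sqlite3.connect("biodeeptime.sqlite")   # from biodeeptime_sqlite.zip
tables = pd.read_sql("SELECT name FROM sqlite_master WHERE type = 'table'", conn)
print(tables)
conn.close()

# The denormalized flat file is convenient for quick exploration.
flat = pd.read_parquet("biodeeptime.parquet")  # from biodeeptime_parquet.zip
print(flat.head())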
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This is the replication data for the study "Relationship between hemodynamic parameters and severity of ischemia-induced left ventricular wall thickening during cardiopulmonary resuscitation of consistent quality". Briefly, it contains hemodynamic and echocardiographic data obtained during cardiopulmonary resuscitation in pigs. After 14 minutes of untreated ventricular fibrillation, simulated basic life support, followed by advanced cardiovascular support, was provided. During cardiopulmonary resuscitation, hemodynamic data, including arterial pressure and end-tidal carbon dioxide, and echocardiographic data, including left ventricular wall thickness and end-diastolic volume, were monitored and recorded.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Data associated with the article by Skorska et al. of the same title.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
We investigate the extent to which advances in the health and life sciences (HLS) are dependent on research in the engineering and physical sciences (EPS), particularly physics, chemistry, mathematics, and engineering. The analysis combines two different bibliometric approaches. The first approach to analyzing the ‘EPS-HLS interface’ is based on term map visualizations of HLS research fields. We consider 16 clinical fields and five life science fields. On the basis of expert judgment, EPS research in these fields is studied by identifying EPS-related terms in the term maps. In the second approach, a large-scale citation-based network analysis is applied to publications from all fields of science. We work with about 22,000 clusters of publications, each representing a topic in the scientific literature. Citation relations are used to identify topics at the EPS-HLS interface. The two approaches complement each other: the advantages of working with textual data compensate for the limitations of working with citation relations, and vice versa. An important advantage of working with textual data lies in the in-depth qualitative insights it provides. Working with citation relations, on the other hand, yields many relevant quantitative statistics. We find that EPS research contributes to HLS developments mainly in the following five ways: new materials and their properties; chemical methods for analysis and molecular synthesis; imaging of parts of the body as well as of biomaterial surfaces; medical engineering mainly related to imaging, radiation therapy, signal processing technology, and other medical instrumentation; and mathematical and statistical methods for data analysis. In our analysis, about 10% of all EPS and HLS publications are classified as being at the EPS-HLS interface. This percentage has remained more or less constant during the past decade.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Uncorrelated data (A) and slightly noisy data following a clear nonmonotonic relationship (B) show poor correlation coefficients (CCs) in all cases. A nonlinear but monotonic relationship (C) is captured by the Spearman CC but yields a low Pearson CC. A linear relationship is characterized by a high Pearson CC (D, E), but only good agreement between the two data series (E) yields a high concordance CC.
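To make the distinctions concrete, here is a minimal Python sketch computing the three coefficients on synthetic data resembling the panels; Pearson and Spearman come from scipy.stats, and the concordance CC is computed from Lin's standard formula. The data are simulated for illustration, not the figure's.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def concordance_cc(x, y):
    # Lin's concordance correlation coefficient:
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    cov = np.cov(x, y, bias=True)[0, 1]
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = x ** 3 + rng.normal(0, 5, 200)  # nonlinear but monotonic, like panel (C)

print("Pearson: ", pearsonr(x, y)[0])     # lowered: the relation is not linear
print("Spearman:", spearmanr(x, y)[0])    # near 1: monotonicity is captured
print("CCC:     ", concordance_cc(x, y))  # low: the two series do not agree

z = x + rng.normal(0, 0.1, 200)  # linear and agreeing, like panel (E)
print("CCC (agreeing):", concordance_cc(x, z))  # near 1: the series agree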
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Data associated with "Long-term temporal trends in gastrointestinal parasite infection in wild Soay sheep", published in the journal Parasitology. Data consist of samples collected from individuals each August from 1988 to 2018, recording the prevalence and abundance of different parasites: fec (strongyles), foc (coccidia), nematodirus, trichuris, capillaria, and moniezia. The suffix "-prev" indicates a variable recording the presence or absence of a given parasite. The "anthelmintic" variable is a binary variable stating whether or not an animal had been treated with an anthelmintic in the 12 months prior to sample collection.
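As an illustration of how the "-prev" variables relate to the counts, here is a minimal Python sketch deriving a presence/absence column from a hypothetical count column; the file name and the "year" column are assumptions about the file's layout.

import pandas as pd

# Minimal sketch, assuming one row per sample with a strongyle faecal
# egg count column named "fec"; file and column names are assumptions.
df = pd.read_csv("soay_parasites.csv")        # hypothetical file name
df["fec-prev"] = (df["fec"] > 0).astype(int)  # presence/absence, as the
                                              # "-prev" suffix indicates
# August prevalence of strongyles by year, excluding treated animals:
untreated = df[df["anthelmintic"] == 0]
print(untreated.groupby("year")["fec-prev"].mean())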
SOAY SHEEP PROJECT DATA REUSE: The attached file(s) contain data derived from the long-term field project monitoring individual Soay sheep on St Kilda and their environment. Please let us know if you use them. Several people have spent the best part of their careers collecting the data. If you plan to analyse the data, there are a number of reasons why it would be very helpful if you could contact Dan Nussey (dan.nussey@ed.ac.uk) before doing so.
[NB. If you are interested in analysing the detailed project data in any depth you may find it helpful to have our full relational database rather than the file(s) available here. If so, then we have a simple process for bringing you onto the project as a collaborator.]
1) The data can be subject to change due to updates in the pedigree, merging of records, occasional errors, and so on.
2) The data are complex, and workers who do not know the study system may benefit from advice when interpreting them.
3) At any one time, a number of people within the existing project collaboration are analysing data from this project. Someone else may already be conducting the analysis you have in mind, and it is desirable to prevent duplication of effort.
4) In order to maintain funding for the project(s), every few years we have to write proposals for original analyses to funding agencies. It is therefore very helpful for those running the project to know what data analyses are in progress.
5) Individual identifiers may vary relative to other data archives from papers using the individual-level data.
Attribution 3.0 (CC BY 3.0) (https://creativecommons.org/licenses/by/3.0/)
License information was derived automatically
Database TSCEvolTree_Aze&2011_GTS2020 is database TSCEvolTree_Aze&2011_CorrJul2018, of anudc:5528 (which see), with stratigraphic ranges now calibrated to timescale GTS2020.
Calibration to GTS2020 employed the planktonic foraminifer datums of Raffi & others (2020) for the Neogene and, for the remaining Cenozoic, those of TimeScale Creator 8.0 (Ogg & others, 2021), after Gradstein & others (2020).
References:
Fordham, B. G., Aze, T., Haller, C., Zehady, A. K., Pearson, P. N., Ogg, J. G., & Wade, B. S. 2018. Future-proofing the Cenozoic macroperforate planktonic foraminifera phylogeny of Aze & others (2011). PLoS ONE 13(10): e0204625.
Gradstein, F. M., Ogg, J. G., Schmitz, M. D., & Ogg, G. M. (Ed.) 2020. A Geologic Time Scale 2020. Elsevier, Amsterdam. 1357 pp.
Ogg, J. G., Ogg, G. M., Gradstein, F. M., Lugowski, A., Ault, A., Zehady, A. K., Chunduru, N. V., Gangi, P., & Ogg, N. 2021. Time Scale Creator. Java software package (Version 8.0). Geologic TimeScale Foundation Inc. https://timescalecreator.org
Raffi, I., Wade, B. S., & Pälike, H. 2020. The Neogene Period. In: Gradstein, F. M., Ogg, J. G., Schmitz, M. D., & Ogg, G. M., Geologic Time Scale 2020. Elsevier, Amsterdam: 1141–1215.
The United Nations Convention on Biological Diversity (CBD) formally recognized the sovereign rights of nations over their biological diversity. Implicit within the treaty is the idea that mega-biodiverse countries will provide genetic resources and grant access to them and scientists in high-income countries will use these resources and share back benefits. However, little research has been conducted on how this framework is reflected in real-life scientific practice. Currently, parties to the CBD are debating whether digital sequence information (DSI) should be regulated under a new benefit-sharing framework. At this critical time point in the upcoming international negotiations, we test the fundamental hypothesis of provision and use by looking at the global patterns of access and use in scientific publications. Our data reject the provider-user relationship and suggest far more complex information flow for digital sequence information. Therefore, any new policy decisions on digital sequence information should be aware of the high level of use of DSI across low- and middle-income countries and seek to preserve open access to this crucial common good.
The GlycoSuite database (GlycoSuiteDB) is an annotated and curated relational database of glycan structures and is a product of Tyrian Diagnostics Ltd (formerly Proteome Systems Ltd). Currently, the database contains most O-linked and N-linked glycans published in the literature from 1990 to 2005. For each structure, information is available concerning the glycan type, linkage and anomeric configuration, mass and composition. Detailed information is provided on native and recombinant sources, including tissue and/or cell type, cell line, strain and disease state. Where known, the proteins to which the glycan structures are attached are described, and cross-references to Swiss-Prot/TrEMBL are given if applicable. The database annotations include literature references, which are linked to PubMed, and detailed information on the methods used to determine each glycan structure is noted to assess the quality of the structural assignment.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The SBT KEMTLE practice test and questionnaire were administered to 569 candidate students (examinees) at the same sitting on September 12, 2015 in Daejeon, Korea. A smart device (a 10-inch tablet PC) was distributed to each examinee, and they marked their responses on the screen of the device. The test items consisted of 50 multimedia items and 80 text items. Examinees were given 120 minutes to complete the examination. All items contained 5 options with 1 best answer. All 569 examinees who were present took the examination, and 560 students responded to the questionnaire on the acceptability of SBT after the examination. The original questionnaire consisted of 8 items regarding individual characteristics, as well as 2 satisfaction, 13 convenience, and 16 preference items (Supplements 1, 2), but based on the results of an exploratory factor analysis, 9 convenience and 9 preference items were selected for this study. Items were scored on a 5-point Likert scale (1, strongly disagree; 2, disagree; 3, neutral; 4, agree; 5, strongly agree).
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The provided ZIP archive contains an XML file, "main-database-description.xml", with the description of all tables (VIEWS) that are exposed publicly at the PLBD server (https://plbd.org/). The XML file describes all columns of the visible tables, specifying their SQL types, measurement units, semantics, calculation formulae, the SQL statements that can be used to generate values in these columns, and the publications in which the formulae were derived.
The XML file conforms to the published XSD schema created for describing relational databases that specify scientific measurement data. The XSD schema ("relational-database_v2.0.0-rc.18.xsd") and all included sub-schemas are provided in the same archive for convenience. All XSD schemas are validated against the "XMLSchema.xsd" schema from the W3C consortium.
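In addition to the make-based check described further below, the XML file can be validated directly; here is a minimal Python sketch using the lxml library (our choice, not part of the deposit) against the deposited schema.

from lxml import etree

# Minimal sketch: validate the deposited XML description against the
# deposited XSD schema; file names are taken from the archive listing.
schema = etree.XMLSchema(etree.parse("relational-database_v2.0.0-rc.18.xsd"))
doc = etree.parse("main-database-description.xml")
if schema.validate(doc):
    print("XML is valid against the schema")
else:
    for error in schema.error_log:
        print(error.message)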
The ZIP file contains an excerpt of the files hosted at https://plbd.org/ at the moment of submission of the PLBD database to the Scientific Data journal, and is provided to conform to the journal's policies. The current data and schemas should be fetched from the published URIs:
https://plbd.org/
https://plbd.org/doc/db/schemas
https://plbd.org/doc/xml/schemas
The software used to generate SQL schemas and RestfulDB metadata, as well as the RestfulDB middleware that publishes databases generated from the XML description on the Web, is available in public Subversion repositories:
svn://www.crystallography.net/solsa-database-scripts
svn://saulius-grazulis.lt/restfuldb
Unpacking the ZIP file creates the "db/" directory with the tree layout given below. In addition to the database description file "main-database-description.xml", all XSD schemas necessary for validation of the XML file are provided. On a GNU/Linux operating system with the GNU Make package installed, the validity of the XML file can be checked by unpacking the ZIP file, entering the unpacked directory, and running 'make distclean; make'. For example, on a Linux Mint distribution, the following commands should work:
unzip main-database-description.zip
cd db/release/v0.10.0/tables/
sh -x dependencies/Linuxmint-20.1/install.sh
make distclean
make
If necessary, additional packages can be installed using the 'install.sh' script in the 'dependencies/' subdirectory corresponding to your operating system. As of the moment of writing, Debian-10 and Linuxmint-20.1 OSes are supported out of the box; similar OSes might work with the same 'install.sh' scripts. The installation scripts need to run package installation commands with system administrator privileges, but they use only the standard system package manager, so they should not put your system at risk. For validation and syntax checking, the 'rxp' and 'xmllint' programs are used.
The log files provided in the "outputs/validation" subdirectory contain validation logs obtained on the system where the XML files were last checked and should indicate the validity of the provided XML file against the referenced schemas.
db/
└── release
└── v0.10.0
└── tables
├── Makeconfig-validate-xml
├── Makefile
├── Makelocal-validate-xml
├── dependencies
├── main-database-description.xml
├── outputs
└── schema
This data set was used in the analysis of the paper "Satisfaction with Doctor-Patient Relationship in the Healthcare Service User Experience".
https://doi.org/10.17026/fp39-0x58
Individual cow data, including breed, milk yield, age at first calving, and calving interval. Date Submitted: 2021-04-30
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Evaluation of STENCIL load times.
Dataset of the paper 'Correlation Between Individual Thigh Muscle Volume and Grip Strength in Relation to Sarcopenia with Automated Muscle Segmentation'.