As of June 2024, the most popular database management system (DBMS) worldwide was Oracle, with a ranking score of *******; MySQL and Microsoft SQL Server rounded out the top three. Although the database management industry contains some of the largest companies in the tech industry, such as Microsoft, Oracle, and IBM, a number of free and open-source DBMSs such as PostgreSQL and MariaDB remain competitive.

Database Management Systems

As the name implies, DBMSs provide a platform through which developers can organize, update, and control large databases. Given the business world’s growing focus on big data and data analytics, knowledge of SQL has become an important asset for software developers around the world, and database management skills are seen as highly desirable. In addition to providing developers with the tools needed to operate databases, DBMSs are also integral to the way consumers access information through applications, which further illustrates the importance of the software.
As of June 2024, the most popular relational database management system (RDBMS) worldwide was Oracle, with a ranking score of *******. Oracle was also the most popular DBMS overall. MySQL and Microsoft SQL Server rounded out the top three.
In 2023, over ** percent of surveyed software developers worldwide reported using PostgreSQL, the highest share of any database technology. Other popular database tools among developers included MySQL and SQLite.
As of June 2024, the most popular commercial database management system (DBMS) in the world was Oracle, with a ranking score of ****. MySQL was the most popular open-source DBMS at that time, with a ranking score of ****.
Approximately ** percent of the surveyed software companies in Russia mentioned PostgreSQL, making it the most popular database management system (DBMS) in the period between February and May 2022. MS SQL and MySQL followed, having been mentioned by ** percent and ** percent of respondents, respectively.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Citation metrics are widely used and misused. We have created a publicly available database of top-cited scientists that provides standardized information on citations, h-index, co-authorship-adjusted hm-index, citations to papers in different authorship positions, and a composite indicator (c-score). Separate data are shown for career-long and, separately, for single recent year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given, and data on retracted papers (based on the Retraction Watch database), as well as citations to/from retracted papers, have been added in the most recent iteration. Scientists are classified into 22 scientific fields and 174 sub-fields according to the standard Science-Metrix classification. Field- and subfield-specific percentiles are also provided for all scientists with at least 5 papers.

Career-long data are updated to end-of-2023 and single recent year data pertain to citations received during calendar year 2023. The selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field. This version (7) is based on the August 1, 2024 snapshot from Scopus, updated to the end of citation year 2023. This work uses Scopus data; calculations were performed using all Scopus author profiles as of August 1, 2024. If an author is not on the list, it is simply because the composite indicator value was not high enough to appear on the list. It does not mean that the author does not do good work.

PLEASE ALSO NOTE THAT THE DATABASE HAS BEEN PUBLISHED IN AN ARCHIVAL FORM AND WILL NOT BE CHANGED. The published version reflects Scopus author profiles at the time of calculation. We thus advise authors to ensure that their Scopus profiles are accurate. REQUESTS FOR CORRECTIONS OF THE SCOPUS DATA (INCLUDING CORRECTIONS IN AFFILIATIONS) SHOULD NOT BE SENT TO US. They should be sent directly to Scopus, preferably by use of the Scopus to ORCID feedback wizard (https://orcid.scopusfeedback.com/), so that the correct data can be used in any future annual updates of the citation indicator databases.

The c-score focuses on impact (citations) rather than productivity (number of publications), and it also incorporates information on co-authorship and author positions (single, first, last author). If you have additional questions, see the attached file on FREQUENTLY ASKED QUESTIONS. Finally, we alert users that all citation metrics have limitations and their use should be tempered and judicious. For more reading, we refer to the Leiden Manifesto: https://www.nature.com/articles/520429a
The ARS Water Data Base is a collection of precipitation and streamflow data from small agricultural watersheds in the United States. This national archive of variable time-series readings for precipitation and runoff contains sufficient detail to reconstruct storm hydrographs and hyetographs. There are currently about 14,000 station-years of data stored in the data base. Watersheds used as study areas range from 0.2 hectare (0.5 acres) to 12,400 square kilometers (4,786 square miles). Raingage networks range from one station per watershed to over 200 stations. The period of record for individual watersheds varies from 1 to 50 years, and some watersheds have been in continuous operation since the mid-1930s.

Resources in this dataset:

Resource Title: FORMAT INFORMATION FOR VARIOUS RECORD TYPES. File Name: format.txt
Resource Description: Format information identifying fields and their lengths is included in this file for all files except those ending with the extension .txt.

TYPES OF FILES: As indicated in the previous section, data are stored by location number in the form LXX, where XX is the location number. In each subdirectory, there will be various files using the following naming conventions:
Runoff data: WSXXX.zip, where XXX is the watershed number assigned by the WDC. This number may or may not correspond to a naming convention used in common literature.
Rainfall data: RGXXXXXX.zip, where XXXXXX is the rain gage station identification.
Maximum-minimum daily air temperature: MMTXXXXX.zip, where XXXXX is the watershed number assigned by the WDC.
Ancillary text files: NOTXXXXX.txt, where XXXXX is the watershed number assigned by the WDC. These files contain textual information including latitude-longitude, the name commonly used in literature, acreage, the most commonly associated rain gage(s) (if known by the WDC), a list of all rain gages on or near the watershed, and land use, topography, and soils as known by the WDC.
Topographic maps of the watersheds: MAPXXXXX.zip, where XXXXX is the location/watershed number assigned by the WDC. Map files are binary TIF files.
NOT ALL FILE TYPES MAY BE AVAILABLE FOR SPECIFIC WATERSHEDS. Data files are still being compiled and translated into a form usable for this archive. Please bear with us while we grow.

Resource Title: Data Inventory - watersheds. File Name: inventor.txt
Resource Description: Watersheds at which records of runoff were being collected by the Agricultural Research Service. Variables: Study Location & Number of Rain Gages; Name; Lat.; Long.; Number; Pub. Code; Record Began; Land Use; Area (Acres); Types of Data

Resource Title: Information about the ARS Water Database. File Name: README.txt

Resource Title: INDEX TO INFORMATION ON EXPERIMENTAL AGRICULTURAL WATERSHEDS. File Name: INDEX.TXT
Resource Description: This report includes identification information on all watersheds operated by the ARS. Only some of these are included in the ARS Water Data Base; they are so indicated in the column titled ARS Water Data Base. Other watersheds will not have data available here or through the Water Data Center (WDC). This index is particularly important since it relates watershed names to the indexing system used by the WDC. Each location has been assigned a number, and the data for that location are stored in a subdirectory coded as LXX, where XX is the location number. The index also indicates the watershed number used by the WDC. Data for a particular watershed are stored in a compressed file named WSXXXXX.zip, where XXXXX is the watershed number assigned by the WDC. Although not included in the index, rain gage information is stored in compressed files named RGXXXXXX.zip, where XXXXXX is a 6-character identification of the rain gage station. The index also provides information such as latitude-longitude for each of the watersheds, acreage, and the period of record for each acreage. Multiple entries for a particular watershed indicate either that the acreage designated for the watershed changed or that there was a break in operations of the watershed.

Resource Title: ARS Water Database files. File Name: ars_water.zip
Resource Description: USING THIS SYSTEM: Before downloading huge amounts of data from the ARS Water Data Base, you should first review the text files included in this directory: index.txt (the index of ARS experimental watersheds described above), station.txt (a station table indicating the period of record for each recording station represented in the data base; the data for a particular station are stored in a single compressed file), and format.txt (the format information described above). File naming conventions within each LXX subdirectory follow the TYPES OF FILES listing above.
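For orientation, here is a minimal Python sketch (a hypothetical helper, not something shipped with the archive) showing how the naming conventions above map a WDC location and watershed number to archive file paths; the identifiers in the example are placeholders.

```python
from pathlib import Path

def ars_paths(root: Path, location: int, watershed: str, raingage: str) -> dict:
    """Resolve ARS Water Data Base file names for one watershed, following
    the documented conventions: data live under LXX (location number),
    runoff in WSXXX.zip, rainfall in RGXXXXXX.zip, max-min temperature in
    MMTXXXXX.zip, ancillary notes in NOTXXXXX.txt, and maps in MAPXXXXX.zip.
    """
    loc_dir = root / f"L{location:02d}"
    return {
        "runoff":      loc_dir / f"WS{watershed}.zip",
        "rainfall":    loc_dir / f"RG{raingage}.zip",
        "temperature": loc_dir / f"MMT{watershed}.zip",
        "notes":       loc_dir / f"NOT{watershed}.txt",
        "map":         loc_dir / f"MAP{watershed}.zip",
    }

# Example with placeholder identifiers: location 26, watershed 12345,
# rain gage station 260001.
for kind, path in ars_paths(Path("."), 26, "12345", "260001").items():
    print(kind, path)
```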
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes bibliographic information for 501 papers that were published from 2010 to April 2017 (the time of the search) and use online biodiversity databases for research purposes. Our overarching goal in this study is to determine how research uses of biodiversity data developed during a time of unprecedented growth of online data resources. We also determine the uses with the highest number of citations, how online occurrence data are linked to other data types, and if/how data quality is addressed. Specifically, we address the following questions:
1.) What primary biodiversity databases have been cited in published research, and which databases have been cited most often?
2.) Is the biodiversity research community citing databases appropriately, and are the cited databases currently accessible online?
3.) What are the most common uses, general taxa addressed, and data linkages, and how have they changed over time?
4.) What uses have the highest impact, as measured through the mean number of citations per year?
5.) Are certain uses applied more often for plants/invertebrates/vertebrates?
6.) Are links to specific data types associated more often with particular uses?
7.) How often are major data quality issues addressed?
8.) What data quality issues tend to be addressed for the top uses?
Relevant papers for this analysis include those that use online and openly accessible primary occurrence records, or those that add data to an online database. Google Scholar (GS) provides full-text indexing, which was important for identifying data sources that often appear buried in the methods section of a paper; our search was therefore restricted to GS. All authors discussed and agreed upon representative search terms, which were relatively broad in order to capture a variety of databases hosting primary occurrence records. The terms included: “species occurrence” database (8,800 results), “natural history collection” database (634 results), herbarium database (16,500 results), “biodiversity database” (3,350 results), “primary biodiversity data” database (483 results), “museum collection” database (4,480 results), “digital accessible information” database (10 results), and “digital accessible knowledge” database (52 results). Note that quotation marks are part of the search terms where a specific phrase must match in whole. We downloaded all records returned by each search (or the first 500 if there were more) into a Zotero reference management database. About one third of the 2,500 papers retrieved were relevant.

Three of the authors with specialized knowledge of the field characterized relevant papers using a standardized tagging protocol based on a series of key topics of interest. We developed a list of potential tags and descriptions for each topic, including: database(s) used, database accessibility, scale of study, region of study, taxa addressed, research use of data, other data types linked to species occurrence data, data quality issues addressed, authors, institutions, and funding sources. Each tagged paper was thoroughly checked by a second tagger.
The final dataset of tagged papers allows us to quantify general areas of research made possible by the expansion of online species occurrence databases, as well as trends over time. Analyses of these data will be published in a separate quantitative review.
User-contributed list of biological databases available on the internet. Currently there are 1,801 entries, each describing a different database. The databases are described in a semi-structured way by using templates, and entries can carry various user comments and annotations. Entries can be searched, listed, or browsed by category. The site uses the same MediaWiki technology that powers Wikipedia; the MediaWiki system allows users to participate on many different levels, ranging from authors and editors to curators and designers. MetaBase aims to be a flexible, user-driven (user-created) resource for the biological database community. The main focuses of MetaBase are:
* As a basic requirement, MetaBase contains a list of databases, URLs, and descriptions of the most commonly used biological databases currently available on the internet.
* The system should be flexible, allowing users to contribute, update, and maintain the data in different ways.
* In the future, we aim to generate more communication between the database developer and user communities.
Organisms living in honey bees and honey bee colonies form large associative holobiont communities that are integral to bee biology. High-throughput sequencing approaches to characterize these holobiont communities from honey bees in various states of health and disease are now commonplace, producing large amounts of nucleotide sequence data that must be accurately and consistently analyzed in order to produce reliable and comparable reports. In addition, new species designations and revisions are actively being made from honey bee holobiont communities, complicating nomenclature in larger databases, where taxonomic descriptions associated with archived sequences can quickly become outdated and misleading. To improve the accuracy and consistency of honey bee holobiont research, we have developed HoloBee: a curated database of publicly accessioned nucleotide sequences from the honey bee holobiont community.

Except in rare and noted exceptions made by curators, sequences used in HoloBee were obtained from, or in association with, Apis mellifera (Western honey bee) as well as other honey bee species where available (e.g. Apis cerana, Apis dorsata, Apis laboriosa, Apis koschevnikovi, Apis florea, Apis andreniformis and Apis nigrocincta). Sources include: within or on the surface of honey bees (adult, pupae, larvae, egg), corbicular pollen, bee bread, royal jelly, honey, comb, hive surfaces (e.g. bottom board debris, frames, landing platforms), and isolates of microbes, parasites and pathogens from honey bees. HoloBee contains two non-overlapping sets of sequence data, HoloBee-Barcode and HoloBee-Mop, each of which has a distinct intended use.

HoloBee-Barcode is a non-redundant database of taxonomically informative barcoding loci for all viruses, bacteria, fungi, protozoans and metazoans associated with honey bees (Apis spp.). It was created from an exhaustive master sequence archive of all valid holobiont sequences. Redundancy was removed from this master archive using a clustering algorithm that grouped sequences with ≥ 99% identity and retained the longest sequence from each cluster as the representative accession for that sequence type (“centroid”); a toy sketch of this clustering idea follows this entry. These centroid sequences were concatenated into a fasta-formatted file to create the HoloBee-Barcode database. Associated taxonomy for each centroid, including Superkingdom through Species and Strain/Isolate, was individually reviewed and corrected when necessary by a curator. Cross-reference tables (separated according to 5 major taxonomic groups) provide a user-friendly outline of information for each centroid accession within HoloBee-Barcode, including taxonomy, gene/product name, sequence length, the unaltered NCBI definition line, the number and identity of redundant sequences clustered within each centroid, and any additional information provided by the curator. HoloBee-Barcode centroid counts are: Viruses = 86; Bacteria = 496; Fungi = 41; Protozoa = 4; Metazoa = 60.

HoloBee-Barcode is intended to improve and standardize quantitative and qualitative metagenomic descriptions of holobiont communities associated with honey bees by providing a curated set of barcode sequences. The goal of genetic barcoding is to associate a nucleotide sequence sample with a taxonomically valid species. Genomic regions targeted for such barcoding purposes varied by taxonomic group. The small subunit (SSU) ribosomal RNA, or 16S rRNA, is the most commonly used barcode for bacteria and is used in HoloBee-Barcode. These 16S rRNA sequences will support the analysis of data generated with the widely used approach of amplicon-based 16S rRNA deep sequencing to study microbiota communities. Although barcode markers for fungi are less definitive than those for bacteria, HoloBee-Barcode defaults to the ribosomal RNA internal transcribed spacer region (ITS), which typically includes ITS-1, 5.8S, and ITS-2; for some clades that cannot be resolved by this region, other barcode markers were selected. The majority of barcodes for metazoan taxa are the mitochondrial locus cytochrome c oxidase subunit I (COI). Complete mitochondrial DNA (mtDNA) sequences for Apis cerana (Asian honey bee) and Galleria mellonella (Greater wax moth) are included as barcodes for these species. We note that A. cerana mtDNA is included because it is considered a potentially invasive honey bee species, and monitoring for its occurrence is practiced regionally, including in Australia, New Zealand and the USA. Protozoan barcodes include cytochrome b oxidase (Cytb), SSU, or ITS, while entire genomes are used for viral barcoding.

HoloBee-Mop is a database composed mostly of chromosomal, mitochondrial and plasmid genome assemblies, in order to aggregate as much honey bee holobiont genomic sequence information as possible. For a few organisms without genome assembly data, transcriptome data are included (e.g. Aethina tumida, small hive beetle). Unlike HoloBee-Barcode, redundancy removal was not performed on the HoloBee-Mop database, and thus this resource provides an archive of nucleotide sequence assemblies from honey bee holobionts. However, since full viral genomes are used in HoloBee-Barcode, only redundant viral sequences occur in HoloBee-Mop. All accessions within each of these assemblies were concatenated into a single fasta-formatted file to create the HoloBee-Mop database. The intended purpose of HoloBee-Mop is to improve honey bee genome and transcriptome assemblies by “mopping up” as much viral, bacterial, fungal, protozoan and non-honey bee metazoan sequence data as possible. Therefore, sequence data remaining after processing reads through both HoloBee-Barcode and HoloBee-Mop that do not map to the honey bee genome may contain unique data from taxonomic variants or novel species. Details for each sequence assembly within HoloBee-Mop are tabulated in cross-reference tables according to each major taxonomic group. HoloBee-Mop assembly counts are: Viruses = 2; Bacteria = 55; Fungi = 5; Protozoa = 1; Metazoa = 6.

Follow the HoloBee database on Twitter at: https://twitter.com/HoloBee_db
For questions about the HoloBee database, contact:
HoloBee database team: holobee.db@gmail.com
Jay Evans: Jay.Evans@ars.usda.gov
Anna Childers: Anna.Childers@ars.usda.gov

Resources in this dataset:

Resource Title: HoloBee_v2016.1 sequence database. File Name: HB_v2016.1.zip
Resource Description: This compressed file contains two fasta sequence files: HB_Bar_v2016.1.fasta (HoloBee-Barcode database) and HB_Mop_v2016.1.fasta (HoloBee-Mop database).
md5 values:
HB_v2016.1.zip: 6e372e443744282128eb51488176503f
HB_Bar_v2016.1.fasta: 109e1f686a690c70ef78fc4b5066a01f
HB_Mop_v2016.1.fasta: ced8c3f5987dce69e800c8c491471eba

Resource Title: data dictionary for HoloBee_v2016.1. File Name: Data_Dictionary_HoloBee_v2016.1.xlsx

Resource Title: HoloBee_v2016.1 cross reference tables. File Name: HB_v2016.1_crossref.zip
Resource Description: This compressed file contains ten spreadsheet files (.xlsx) tabulating detailed information for all centroids (HoloBee-Barcode database) and sequence assemblies (HoloBee-Mop database) used in HoloBee v2016.1:
HB_Bar_v2016.1_bacteria_crossref_2016-05-18.xlsx
HB_Bar_v2016.1_fungi_crossref_2016-05-20.xlsx
HB_Bar_v2016.1_metazoa_crossref_2016-05-16.xlsx
HB_Bar_v2016.1_protozoa_crossref_2016-05-20.xlsx
HB_Bar_v2016.1_viruses_crossref_2016-05-17.xlsx
HB_Mop_v2016.1_bacteria_crossref_2016-05-12.xlsx
HB_Mop_v2016.1_fungi_crossref_2016-05-12.xlsx
HB_Mop_v2016.1_metazoa_crossref_2016-04-15.xlsx
HB_Mop_v2016.1_protozoa_crossref_2016-04-11.xlsx
HB_Mop_v2016.1_viruses_crossref_2016-05-12.xlsx
md5 value: HB_v2016.1_crossref.zip: a8a57d92830eb77904743afc95980465

Resource Title: data dictionary for HoloBee_v2016.1. File Name: Data_Dictionary_HoloBee_v2016.1.csv
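The redundancy-removal step described above (group sequences at ≥ 99% identity, keep the longest member of each cluster as the centroid) can be illustrated with a toy Python sketch. This shows only the idea, not the curators' actual pipeline, which would rely on an alignment-based clustering tool; here difflib's similarity ratio stands in for true sequence identity, and the accessions are invented.

```python
from difflib import SequenceMatcher

def cluster_centroids(seqs, threshold=0.99):
    """Greedy, longest-first clustering: the first sequence that starts a
    cluster is its centroid, so each centroid is the longest member.
    Returns a mapping of centroid accession -> redundant member accessions."""
    clusters = {}
    for acc, seq in sorted(seqs.items(), key=lambda kv: -len(kv[1])):
        for centroid_acc, members in clusters.items():
            # Stand-in for alignment-based percent identity.
            if SequenceMatcher(None, seqs[centroid_acc], seq).ratio() >= threshold:
                members.append(acc)
                break
        else:
            clusters[acc] = []  # starts a new cluster; becomes its centroid
    return clusters

# Invented accessions: A2 differs from A1 at one position out of 200,
# so it collapses into A1's cluster; B1 stays its own centroid.
toy = {"A1": "ACGT" * 50, "A2": ("ACGT" * 50)[:-1] + "A", "B1": "GGCC" * 50}
print(cluster_centroids(toy))  # {'A1': ['A2'], 'B1': []}
```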
The majority of respondents stated that their company used more than one database for their operations. This indicates the complexity of maintaining security of IT infrastructure at organizations. Microsoft Azure database (** percent) and Microsoft SQL Server (** percent) were the most commonly used databases among respondents.
The European Commission-funded project Stance4Health (S4H) aims to develop a complete personalised nutrition service. In order to succeed, sources of information on the nutritional composition and other characteristics of foods need to be as comprehensive as possible. Food composition tables or databases (FCT/FCDB) are the most commonly used tools for this purpose. The aim of this study is to describe the harmonisation efforts carried out to obtain the Stance4Health FCDB. A total of 10 FCT/FCDB were selected from different countries and organizations. Data were classified using FoodEx2 and INFOODS tagnames to harmonise the information. Hazard analysis and critical control points (HACCP) analysis was applied as the quality control method. Data were processed using spreadsheets and MySQL. S4H’s FCDB is composed of 880 elements, including nutrients and bioactive compounds. A total of 2648 unified foods were used to complete the missing values of the national FCDB used. Recipes and dishes were estimated following EuroFIR standards via linked tables. S4H’s FCDB will be part of the smartphone app developed in the framework of the Stance4Health European project, which will be used in different personalized nutrition intervention studies. The S4H FCDB has great potential, being one of the most complete in terms of the number of harmonized foods, nutrients and bioactive compounds included.
https://creativecommons.org/publicdomain/zero/1.0/
A Comprehensive List of Open Data Portals from Around the World
Open Data Commons Public Domain Dedication and License (PDDL) v1.0

DISCLAIMER: Open Data Commons is not a law firm and does not provide legal services of any kind.
Open Data Commons has no formal relationship with you. Your receipt of this document does not create any kind of agent-client relationship. Please seek the advice of a suitably qualified legal professional licensed to practice in your jurisdiction before using this document.
No warranties and disclaimer of any damages.
This information is provided “as is”, and this site makes no warranties on the information provided. Any damages resulting from its use are disclaimed.
Read the full disclaimer. A plain language summary of the Public Domain Dedication and License is available as well as a plain text version.
Public Domain Dedication and License (PDDL)

PREAMBLE: The Open Data Commons – Public Domain Dedication and Licence is a document intended to allow you to freely share, modify, and use this work for any purpose and without any restrictions. This licence is intended for use on databases or their contents (“data”), either together or individually.
Many databases are covered by copyright. Some jurisdictions, mainly in Europe, have specific special rights that cover databases called the “sui generis” database right. Both of these sets of rights, as well as other legal rights used to protect databases and data, can create uncertainty or practical difficulty for those wishing to share databases and their underlying data but retain a limited amount of rights under a “some rights reserved” approach to licensing as outlined in the Science Commons Protocol for Implementing Open Access Data. As a result, this waiver and licence tries to the fullest extent possible to eliminate or fully license any rights that cover this database and data. Any Community Norms or similar statements of use of the database or data do not form a part of this document, and do not act as a contract for access or other terms of use for the database or data.
THE POSITION OF THE RECIPIENT OF THE WORK Because this document places the database and its contents in or as close as possible within the public domain, there are no restrictions or requirements placed on the recipient by this document. Recipients may use this work commercially, use technical protection measures, combine this data or database with other databases or data, and share their changes and additions or keep them secret. It is not a requirement that recipients provide further users with a copy of this licence or attribute the original creator of the data or database as a source. The goal is to eliminate restrictions held by the original creator of the data and database on the use of it by others.
THE POSITION OF THE DEDICATOR OF THE WORK Copyright law, as with most other law under the banner of “intellectual property”, is inherently national law. This means that there exist several differences in how copyright and other IP rights can be relinquished, waived or licensed in the many legal jurisdictions of the world. This is despite much harmonisation of minimum levels of protection. The internet and other communication technologies span these many disparate legal jurisdictions and thus pose special difficulties for a document relinquishing and waiving intellectual property rights, including copyright and database rights, for use by the global community. Because of this feature of intellectual property law, this document first relinquishes the rights and waives the relevant rights and claims. It then goes on to license these same rights for jurisdictions or areas of law that may make it difficult to relinquish or waive rights or claims.
The purpose of this document is to enable rightsholders to place their work into the public domain. Unlike licences for free and open source software, free cultural works, or open content licences, rightsholders will not be able to “dual license” their work by releasing the same work under different licences. This is because they have allowed anyone to use the work in whatever way they choose. Rightsholders therefore can’t re-license it under copyright or database rights on different terms because they have nothing left to license. Doing so creates truly accessible data to build rich applications and advance the progress of science and the arts.
This document can cover either or both of the database and its contents (the data). Because databases can have a wide variety of content – not just factual data – rightsholders should use the Open Data Commons – Public Domain Dedication & Licence for an entire database and its contents only if everything can be placed under the terms of this document. Because even factual data can sometimes have intellectual property rights, rightsholders should use this licence to cover b...
Chlorophyll a is the most commonly used indicator of phytoplankton biomass and is a proxy for primary productivity in the marine environment. It is relatively simple and cost-effective to measure when compared to phytoplankton abundance and is thus routinely included in many surveys. Here we collate 173,333 records of chlorophyll a collected since 1965 from Australian waters, gathered from researchers on regular coastal monitoring surveys to long ocean voyages. This dataset concentrates on samples analysed using spectrophotometry, fluorometry and high performance liquid chromatography (HPLC). Here we collate all available chlorophyll a data from Australian waters, gathered from researchers, students, government bodies, state agencies, councils and databases, along with the associated metadata. The Australian Chlorophyll a Database is available through the Australian Ocean Data Network portal (AODN: https://portal.aodn.org.au/), the main repository for marine data in Australia. The Australian Chlorophyll a Database will be maintained and updated through the CSIRO data centre, with periodic updates sent to the AODN. A snapshot of the Australian Chlorophyll a Database at the time of this publication has been assigned a DOI and will be maintained in perpetuity by the AODN. These data can be used in isolation as an index of phytoplankton biomass or in combination with other data to provide insight into water quality, ecosystem state, and/or the relationships with other trophic levels such as zooplankton or fish. This metadata record was based on the following CSIRO metadata record: https://marlin.csiro.au/geonetwork/srv/eng/search?uuid=4c72fe3b-bddf-44da-a809-1791033a6ac5.
Database of bibliographic details of over 9,000 references published between 1951 and the present day, including abstracts, journal articles, book chapters and books. It replaces the two former separate websites for Ian Stolerman's drug discrimination database and Dick Meisch's drug self-administration database. Lists of standardized keywords are used to index the citations. Most of the keywords are generic drug names, but they also include methodological terms, species studied and drug classes. This index makes it possible to selectively retrieve references according to the drugs used as the training stimuli, drugs used as test stimuli, drugs used as pretreatments, species, etc., by entering your own terms or by using our comprehensive lists of search terms.

Drug Discrimination
Drug discrimination is widely recognized as one of the major methods for studying the behavioral and neuropharmacological effects of drugs and plays an important role in drug discovery and investigations of drug abuse. In drug discrimination studies, effects of drugs serve as discriminative stimuli that indicate how reinforcers (e.g. food pellets) can be obtained. For example, animals can be trained to press one of two levers to obtain food after receiving injections of a drug, and to press the other lever to obtain food after injections of the vehicle. After the discrimination has been learned, the animal starts pressing the appropriate lever according to whether it has received the training drug or vehicle; accuracy is very good in most experiments (90% or more correct). Discriminative stimulus effects of drugs are readily distinguished from the effects of food alone by collecting data in brief test sessions where responses are not differentially reinforced. Thus, trained subjects can be used to determine whether test substances are identified as like or unlike the drug used for training.

Drug Self-administration
Drug self-administration methodology is central to the experimental analysis of drug abuse and dependence (addiction). It constitutes a key technique in numerous investigations of drug intake and its neurobiological basis and has even been described by some as the gold standard among methods in the area. Self-administration occurs when, after a behavioral act or chain of acts, a feedback loop results in the introduction of a drug or drugs into a human or infra-human subject. The drug is usually conceptualized as serving the role of a positive reinforcer within a framework of operant conditioning. For example, animals can be given the opportunity to press a lever to obtain an infusion of a drug through a chronically indwelling venous catheter. If the available dose of the drug serves as a positive reinforcer, then the rate of lever-pressing will increase and a sustained pattern of responding at a high rate may develop. Reinforcing effects of drugs are distinguishable from other actions, such as increases in general activity, by means of one or more control procedures. Trained subjects can be used to investigate the behavioral and neuropharmacological basis of drug-taking and drug-seeking behaviors and the reinstatement of these behaviors in subjects with a previous history of drug intake (relapse models). Other applications include evaluating novel compounds for liability to produce abuse and dependence and for their value in the treatment of drug dependence and addiction.

The bibliography is updated about four times per year.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Fisheries management is generally based on age-structured models; thus, fish ageing data are collected by experts who analyze and interpret calcified structures (scales, vertebrae, fin rays, otoliths, etc.) according to a visual process. The otolith, in the inner ear of the fish, is the most commonly used calcified structure because it is metabolically inert and historically one of the first proxies developed. It contains information throughout the whole life of the fish and provides age-structure data for stock assessments of all commercial species. The traditional human reading method to determine age is very time-consuming. Automated image analysis can be a low-cost alternative method; however, the first step is the transformation of routinely taken otolith images into standardized images within a database, so that machine learning techniques can be applied to the ageing data. Otolith shape, resulting from the synthesis of genetic heritage and environmental effects, is a useful tool to identify stock units, so a database of standardized images could also be used for this aim. Using the routinely measured otolith data of plaice (Pleuronectes platessa; Linnaeus, 1758) and striped red mullet (Mullus surmuletus; Linnaeus, 1758) in the eastern English Channel and North-East Arctic cod (Gadus morhua; Linnaeus, 1758), a greyscale image matrix was generated from the raw images in different formats. Contour detection was then applied to identify broken otoliths, the orientation of each otolith, and the number of otoliths per image. To finalize this standardization process, all images were resized and binarized. Several mathematical morphology tools were developed from these new images to align and orient the images, placing the otoliths in the same layout for each image. For this study, we used three databases from two different laboratories covering three species (cod, plaice and striped red mullet). The method was validated for these three species and could be applied to other species for age determination and stock identification.
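A minimal sketch of the standardization steps just described, assuming OpenCV in Python: greyscale conversion, binarization, contour detection to count otoliths and estimate orientation, then resizing. The file names, output size, and Otsu thresholding are illustrative placeholders, not the study's actual parameters.

```python
import cv2

img = cv2.imread("otolith_raw.png")                # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # greyscale image matrix

# Binarize with Otsu's threshold so otoliths separate from the background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Contour detection: number of otoliths per image and their outlines.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"otoliths detected: {len(contours)}")

# Orientation of the largest otolith via a fitted ellipse (needs >= 5 points);
# a broken otolith could be flagged by an unusual area or contour shape.
largest = max(contours, key=cv2.contourArea)
if len(largest) >= 5:
    (_, _), (_, _), angle = cv2.fitEllipse(largest)
    print(f"orientation: {angle:.1f} degrees")

# Standardize: resize every image to the same frame and keep the binary form.
standard = cv2.resize(binary, (512, 512))
cv2.imwrite("otolith_standardized.png", standard)
```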
ABSTRACT: Access to academic information has become one of the pillars of the student’s role in the learning process, and it is strategic to analyze the behavior and management of available information resources pertinent to the training and excellence of future professionals. The objective of this research was to analyze the behavior reported by medical students of a higher education institution with an active learning methodology regarding access to academic information, as well as their opinions about the construction of academic knowledge during undergraduate training. A cross-sectional, analytical observational study was conducted with 274 students from the Pernambuco Health College Medical School in Recife, Pernambuco. A specific questionnaire was prepared and validated for the data collection, and the data were subsequently analyzed descriptively, using absolute and percentage frequencies for categorical variables and measurements. To evaluate the association between two categorical variables, Pearson’s chi-square test and Fisher’s exact test were used. The research project was approved by the ethics committee and respected all ethical requirements. Among those surveyed, 52.8% used electronic media alone, while 37.7% indicated that they handled both electronic and print media and 9.4% cited print media alone. In relation to the forms of study with which students most identify, the options confirmed by the majority were “online books (PDF, Word, Epub, etc.)” and “paper books” (81.9% and 68.3%, respectively). Regarding the use of electronic databases in their study routine, the majority (67.9%) responded positively; the most commonly cited databases included SciELO (86.7%) and PubMed (70.6%). When evaluating access to scientific information among medical students, it was seen that, although most students used electronic databases in their academic routine, more than half had not received training in bibliographic research techniques; most had learned through practice. Almost all the students surveyed agreed on the importance of evidence-based practice in the academic routine, and more than half reported feeling less up-to-date when they do not seek information online.
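As an aside, the two association tests named above are available in SciPy and take a contingency table of counts; the 2x2 table below is invented for illustration, not taken from the study's data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = received training in bibliographic research
# (yes/no), columns = routinely uses electronic databases (yes/no).
table = [[60, 20],
         [126, 68]]

chi2, p_chi2, dof, _ = chi2_contingency(table)   # Pearson's chi-square test
odds_ratio, p_fisher = fisher_exact(table)       # Fisher's exact test
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```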
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The sequence database searching method is widely used in proteomics for peptide identification. To control the false discovery rate (FDR) of the search results, the target–decoy method generates and searches a decoy database together with the target database. A known problem is that the target protein sequence database may contain numerous repeated peptides, and the structures of these repeats are not preserved by most existing decoy generation algorithms. Previous studies suggest that such a discrepancy between the target and decoy databases may lead to inaccurate FDR estimation. Based on the de Bruijn graph model, we propose a new repeat-preserving algorithm to generate decoy databases. We prove that this algorithm preserves the structures of the repeats in the target database to a great extent. The de Bruijn method has been compared with a few other commonly used methods and demonstrated superior FDR estimation accuracy and an increased number of peptide identifications.
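The repeat-preserving idea lends itself to a toy illustration: build the de Bruijn graph of a target sequence (nodes are (k-1)-mers, one edge per k-mer occurrence) and spell a decoy from a randomized Eulerian walk. Any such walk reuses exactly the same k-mer multiset, so repeated k-mers in the target reappear in the decoy. The Python sketch below shows only that principle, under our own simplifications; it is not the paper's published algorithm.

```python
import random
from collections import defaultdict

def debruijn_decoy(seq: str, k: int = 3, seed: int = 0) -> str:
    """Return a decoy permutation of `seq` with the same k-mer multiset,
    via a randomized Eulerian walk on the sequence's de Bruijn graph."""
    rng = random.Random(seed)
    edges = defaultdict(list)              # (k-1)-mer -> successor (k-1)-mers
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        edges[kmer[:-1]].append(kmer[1:])  # one edge per k-mer occurrence
    for succs in edges.values():
        rng.shuffle(succs)                 # randomize the walk order

    # Hierholzer's algorithm, starting at the sequence's first (k-1)-mer,
    # which is a valid start node for the Eulerian path by construction.
    stack, path = [seq[:k - 1]], []
    while stack:
        node = stack[-1]
        if edges[node]:
            stack.append(edges[node].pop())
        else:
            path.append(stack.pop())
    path.reverse()
    # Spell the decoy: first node plus the last residue of each next node.
    return path[0] + "".join(node[-1] for node in path[1:])

# The decoy contains exactly the same tripeptides as the target, including
# the repeated ones, just stitched together in a different order.
target = "PEPTIDEPEPTIDER"
print(debruijn_decoy(target, k=3, seed=42))
```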
Chlorophyll a is the most commonly used indicator of phytoplankton biomass and is a proxy for primary productivity in the marine environment. It is relatively simple and cost-effective to measure when compared to phytoplankton abundance and is thus routinely included in many surveys. Here we collate 173,333 records of chlorophyll a collected since 1965 from Australian waters, gathered from researchers on surveys ranging from regular coastal monitoring to long ocean voyages. This dataset concentrates on samples analysed using spectrophotometry, fluorometry and high performance liquid chromatography (HPLC). The Australian Chlorophyll a database is freely available through the Australian Ocean Data Network portal (http://imos.aodn.org.au/). These data can be used in isolation as an index of phytoplankton biomass or in combination with other data to provide insight into water quality, ecosystem state, and/or the relationships with other trophic levels such as zooplankton or fish.