This is the sample database from sqlservertutorial.net. It is a great dataset for learning SQL and practicing queries against relational databases.
Database Diagram:
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4146319%2Fc5838eb006bab3938ad94de02f58c6c1%2FSQL-Server-Sample-Database.png?generation=1692609884383007&alt=media
The sample database is copyrighted and may not be used for commercial purposes, including but not limited to: selling it, or including it in paid courses.
The Sakila sample database is a fictitious database designed to represent a DVD rental store. The tables of the database include film, film_category, actor, customer, rental, payment and inventory, among others. The Sakila sample database is intended to provide a standard schema that can be used for examples in books, tutorials, articles, samples, and so forth. Detailed information about the database can be found on the MySQL website: https://dev.mysql.com/doc/sakila/en/
Sakila for SQLite is part of the sakila-sample-database-ports project, which is intended to provide ported versions of the original MySQL database for other database systems. It is a port of the Sakila example database available for MySQL, originally developed by Mike Hillyer of the MySQL AB documentation team. The project is designed to help database administrators decide which database to use for development of new products: the same SQL can be run against different kinds of databases and the performance compared.
License: BSD. Copyright DB Software Laboratory, http://www.etl-tools.com
Note: Part of the insert scripts were generated by Advanced ETL Processor http://www.etl-tools.com/etl-tools/advanced-etl-processor-enterprise/overview.html
Information about the project and the downloadable files can be found at: https://code.google.com/archive/p/sakila-sample-database-ports/
Other versions and developments of the project can be found at: https://github.com/ivanceras/sakila/tree/master/sqlite-sakila-db
https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/Sakila
Direct access to the MySQL Sakila database, which does not require installation of MySQL (queries can be typed directly in the browser), is provided on the phpMyAdmin demo version website: https://demo.phpmyadmin.net/master-config/
The files in the sqlite-sakila-db folder are the script files which can be used to generate the SQLite version of the database. For convenience, the script files have already been run in cmd to generate the sqlite-sakila.db file, as follows:
sqlite> .open sqlite-sakila.db # creates the .db file
sqlite> .read sqlite-sakila-schema.sql # creates the database schema
sqlite> .read sqlite-sakila-insert-data.sql # inserts the data
Therefore, the sqlite-sakila.db file can be loaded directly into SQLite3 and queries can be executed against it. You can refer to my notebook for an overview of the database and a demonstration of SQL queries. Note: data for the film_text table is not provided in the script files, so the film_text table is empty; instead, the film_id, title and description fields are included in the film table. Moreover, the Sakila sample database has many versions, so an Entity Relationship Diagram (ERD) is provided to describe this specific version. You are advised to refer to the ERD to familiarise yourself with the structure of the database.
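As a quick sanity check after opening the .db file, a query such as the one below can be run. It assumes the standard Sakila table and column names (rental, inventory, film_category, category); verify them against the ERD, since this SQLite port may differ slightly from other versions.

```sql
-- Top 10 film categories by number of rentals (standard Sakila column names;
-- check the ERD if your copy of the schema differs).
SELECT c.name             AS category,
       COUNT(r.rental_id) AS rental_count
FROM rental r
JOIN inventory i      ON i.inventory_id = r.inventory_id
JOIN film_category fc ON fc.film_id = i.film_id
JOIN category c       ON c.category_id = fc.category_id
GROUP BY c.name
ORDER BY rental_count DESC
LIMIT 10;
```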
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset contains data on biological samples collected during scientific missions of JAMSTEC ships (NATSUSHIMA, KAIYO, YOKOSUKA, KAIREI and MIRAI) and submersibles. The data are derived from the Marine Biological Sample Database of JAMSTEC. In the original database, you may search sample information such as number of individuals, preservation methods, sex, life stages, identification, collecting information and related literature.
MySQL Sample Database Schema. The MySQL sample database schema consists of the following tables:
customers: stores customers' data.
products: stores a list of scale model cars.
productlines: stores a list of product lines.
orders: stores sales orders placed by customers.
orderdetails: stores sales order line items for every sales order.
payments: stores payments made by customers based on their accounts.
employees: stores employee information and the organization structure such as who reports to whom.
offices: stores sales office data.
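As an illustration of how these tables relate, the query below totals the value ordered by each customer. The column names (customerNumber, customerName, orderNumber, quantityOrdered, priceEach) follow the usual naming of this sample schema; verify them against your copy before running.

```sql
-- Total value ordered per customer, joining customers -> orders -> orderdetails.
-- Column names assume the standard version of this sample schema.
SELECT c.customerName,
       SUM(od.quantityOrdered * od.priceEach) AS total_ordered
FROM customers c
JOIN orders o        ON o.customerNumber = c.customerNumber
JOIN orderdetails od ON od.orderNumber   = o.orderNumber
GROUP BY c.customerName
ORDER BY total_ordered DESC;
```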
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The spearfish sample database is being distributed to provide users with a solid database on which to work for learning the tools of GRASS. This document provides some general information about the database and the map layers available. With the release of GRASS 4.1, the GRASS development staff is pleased to announce that the sample data set spearfish is also being distributed. The spearfish data set covers two topographic 1:24,000 quads in western South Dakota. The names of the quads are Spearfish and Deadwood North, SD. The area covered by the data set is in the vicinity of Spearfish, SD and includes a majority of the Black Hills National Forest (i.e., Mount Rushmore). It is anticipated that enough data layers will be provided to allow users to use nearly all of the GRASS tools on the spearfish data set. A majority of this spearfish database was initially provided to USACERL by the EROS Data Center (EDC) in Sioux Falls, SD. The GRASS Development staff expresses acknowledgement and thanks to: the U.S. Geological Survey (USGS) and EROS Data Center for allowing us to distribute this data with our release of GRASS software; and to the U.S. Census Bureau for their samples of TIGER/Line data and the STF1 data which were used in the development of the TIGER programs and tutorials. Thanks also to SPOT Image Corporation for providing multispectral and panchromatic satellite imagery for a portion of the spearfish data set and for allowing us to distribute this imagery with GRASS software. In addition to the data provided by the EDC and SPOT, researchers at USACERL have developed several new layers, thus enhancing the spearfish data set. To use the spearfish data, when entering GRASS, enter spearfish as your choice for the current location.
This is the classical GRASS GIS dataset from 1993 covering a part of Spearfish, South Dakota, USA, with raster, vector and point data. The Spearfish data base covers two 7.5 minute topographic sheets in the northern Black Hills of South Dakota, USA. It is in the Universal Transverse Mercator Projection. It was originally created by Larry Batten while he was with the U. S. Geological Survey's EROS Data Center in South Dakota. The data base was enhanced by USA/CERL and cooperators.
In order to practice writing SQL queries in a semi-realistic database, I discovered and imported Microsoft's AdventureWorks sample database into Microsoft SQL Server Express. The Adventure Works [fictitious] company represents a bicycle manufacturer that sells bicycles and accessories to global markets. Queries were written for developing and testing a Tableau dashboard.
The dataset presented here represents a fraction of the entire manufacturing relational database. Tables within the dataset include product, purchasing, work order, and transaction data.
The full sample database can be found on the Microsoft SQL Docs website, https://learn.microsoft.com/en-us/sql/samples/, and on GitHub: https://github.com/microsoft/sql-server-samples
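A typical query against this subset joins the product and work order data. The sketch below assumes the standard AdventureWorks table and column names (Production.Product, Production.WorkOrder, OrderQty); since the sample ships in several editions, confirm the names against your installed copy.

```sql
-- Total quantity manufactured per product, from AdventureWorks work orders.
-- Table and column names assume the standard AdventureWorks schema.
SELECT TOP (10)
       p.Name,
       SUM(wo.OrderQty) AS total_manufactured
FROM Production.WorkOrder AS wo
JOIN Production.Product   AS p ON p.ProductID = wo.ProductID
GROUP BY p.Name
ORDER BY total_manufactured DESC;
```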
The Biological Sampling Database (BSD) is an Oracle relational database that is maintained at the NMFS Panama City Laboratory and NOAA NMFS Beaufort Laboratory. The data set includes port samples of reef fish species collected from commercial and recreational fishery landings in the U.S. South Atlantic (NC - FL Keys). The data set serves as an inventory of samples stored at the NMFS Beaufort Laboratory as well as final processed data. Information that may be included for each sample is trip-level information, species, size measurements, age, sex and reproductive data.
Sample dataset associated with the report of the same name. Past river restoration projects in a variety of programs across all of Reclamation's regions were evaluated to ascertain the best method of presenting this data. At the end of this project, this dataset was presented to the Enterprise Asset Registry team to be incorporated into the Fish Structures Asset Class layer. Therefore, all data associated with this spreadsheet lives within the Enterprise Asset Registry's geospatial Fish Structures Asset Class layer.
In 1968, the Missouri Geological Survey (MGS) established the Operation Basement program to address three objectives: a) to obtain drill hole and underground mining data relative to the structure and composition of the buried Precambrian basement; b) to expand mapping in the Precambrian outcrop area and conduct research related to Precambrian geology and mineral resources; and c) to eventually publish the results of the first two objectives in the Contribution to Precambrian Geology series (Kisvarsanyi, 1976). The database presented here represents the first of those objectives, and it includes additional data gathered after the third objective was accomplished. It was originally compiled in close cooperation with exploration and mining companies operating in Missouri, who provided drillhole data, core and rock samples to MGS. These data enabled geologists to study otherwise unexposed basement rocks from a large area of the state for the first time, allowing better classification and understanding of the Precambrian basement across the state. MGS is continuing data collection and database compilation today as information becomes available, furthering our knowledge of the Missouri Precambrian basement. This effort was supported through a cooperative agreement with the Mineral Resource Program of the U.S. Geological Survey. There is no plan to update this Data Release product.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Anveshak Rock Sample Database is a dataset for object detection tasks - it contains R annotations for 904 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Data set consists of port samples of Gulf and Atlantic menhaden from the reduction purse-seine fisheries; data include specimen fork length, weight and age (yrs), as well as date and location of catch.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This database contains a comprehensive inventory of geologic (coral, coral reef, limestone, and sediment) cores and samples collected, analyzed, published, and/or archived by, or in collaboration with, the U.S. Geological Survey St. Petersburg Coastal and Marine Science Center (USGS SPCMSC). The SPCMSC Geologic Core and Sample Database includes geologic cores and samples collected from the 1970s to the present day, from study sites across the world. This database captures metadata about samples throughout the USGS Science Data Lifecycle, including field collection, laboratory analysis, publication of research, and archival or deaccession. For more information about the USGS Science Data Lifecycle, see USGS Open-File Report 2013-1265 (https://doi.org/10.3133/ofr20131265). The SPCMSC Geologic Core and Sample Database also includes storage locations for physical samples and cores archived in a repository (USGS SPCMSC or elsewhere, if known). The majority of the samples and cores ...
The NIS is the largest publicly available all-payer inpatient healthcare database designed to produce U.S. regional and national estimates of inpatient utilization, access, cost, quality, and outcomes. Unweighted, it contains data from around 7 million hospital stays each year. Weighted, it estimates around 35 million hospitalizations nationally. Developed through a Federal-State-Industry partnership sponsored by the Agency for Healthcare Research and Quality (AHRQ), HCUP data inform decision making at the national, State, and community levels.
Its large sample size is ideal for developing national and regional estimates and enables analyses of rare conditions, uncommon treatments, and special populations.
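To move from unweighted records to national estimates, each discharge record carries a sampling weight that is summed or applied in analyses. A minimal sketch, assuming the NIS Core file has been loaded into a table named nis_core (a hypothetical name) and that the discharge-level weight column is DISCWT:

```sql
-- Weighted national estimate of total discharges from the NIS.
-- nis_core is a hypothetical table name for a loaded NIS Core file;
-- DISCWT is the NIS discharge-level sampling weight.
SELECT SUM(DISCWT) AS estimated_national_discharges
FROM nis_core;
```

Standard errors for such estimates must still account for the survey design, as covered in the tutorials listed below.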
DO NOT use this data without referring to the NIS Database Documentation.
All manuscripts (and other items you'd like to publish) must be submitted to phsdatacore@stanford.edu for approval prior to journal submission. We will check your cell sizes and citations.
For more information about how to cite PHS and PHS datasets, please visit:
https://phsdocs.developerhub.io/need-help/citing-phs-data-core
You must also make sure that your work meets all of the AHRQ (data owner) requirements for publishing with HCUP data, listed at https://hcup-us.ahrq.gov/db/nation/nis/nischecklist.jsp
For additional assistance, AHRQ has created the HCUP Online Tutorial Series, a series of free, interactive courses which provide training on technical methods for conducting research with HCUP data. Topics include an HCUP Overview Course and these tutorials:
• The HCUP Sampling Design tutorial is designed to help users learn how to account for sample design in their work with HCUP national (nationwide) databases.
• The Producing National HCUP Estimates tutorial is designed to help users understand how the three national (nationwide) databases – the NIS, Nationwide Emergency Department Sample (NEDS), and Kids' Inpatient Database (KID) – can be used to produce national and regional estimates.
• The Calculating Standard Errors tutorial shows how to accurately determine the precision of the estimates produced from the HCUP nationwide databases. Users will learn two methods for calculating standard errors for estimates produced from the HCUP national (nationwide) databases.
• The HCUP Multi-year Analysis tutorial presents solutions that may be necessary when conducting analyses that span multiple years of HCUP data.
• The HCUP Software Tools Tutorial provides instructions on how to apply the AHRQ software tools to HCUP or other administrative databases.
New tutorials are added periodically, and existing tutorials are updated when necessary. The Online Tutorial Series is located on the HCUP-US website at https://hcup-us.ahrq.gov/tech_assist/tutorials.jsp
In 2015, AHRQ restructured the data as described here:
https://hcup-us.ahrq.gov/db/nation/nis/2015HCUPNationalInpatientSample.pdf
Some key points:
This research project aimed to create a river restoration database to collect information about projects that have already been implemented and to inform future rehabilitation designs for fish and aquatic species recovery under the Endangered Species Act. Past river restoration projects in a variety of programs across all of Reclamation's regions were evaluated to ascertain the best method of presenting this data. At the end of this project, this data was presented to the Enterprise Asset Registry team to be incorporated into the Fish Structures Asset Class layer. As the Fish Structures Asset Class continues to develop, river restoration data will be added and kept up to date as part of the Enterprise Asset Registry Project, resulting in a living dataset for all of Reclamation to use as a resource.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example database used for training and predicting.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset comprises a collection of example DMPs from a wide array of fields, obtained from a number of different sources outlined in the README. Data extracted from the examples include the discipline and field of study, author, institutional affiliation and funding information, location, date modified, title, research and data type, description of project, link to the DMP, and, where possible, external links to related publications, grant pages, or French-language versions. This CSV document serves as the content for a McMaster Data Management Plan (DMP) Database as part of the Research Data Management (RDM) Services website, located at https://u.mcmaster.ca/dmps. Other universities and organizations are encouraged to link to the DMP Database or use this dataset as the content for their own DMP Database. This dataset will be updated regularly to include new additions and will be versioned as such. We are gathering submissions at https://u.mcmaster.ca/submit-a-dmp to continue to expand the collection.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This dataset is a compilation of data obtained from the Idaho Department of Water Quality, the Idaho Department of Water Resources, and the Water Quality Portal. The 'Samples' table stores information about individual groundwater samples, including what was being sampled, when it was sampled, the results of the sample, etc. This table is related to the 'MonitoringLocation' table (which contains information about the well being sampled).
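The one-to-many relationship between the two tables can be queried with an ordinary join. The sketch below is illustrative only: the key and column names (MonitoringLocationID, SampleDate, Result) are hypothetical placeholders and should be replaced with the actual field names in the dataset.

```sql
-- Illustrative join of the 'Samples' and 'MonitoringLocation' tables.
-- MonitoringLocationID, SampleDate and Result are hypothetical column names.
SELECT m.MonitoringLocationID,
       s.SampleDate,
       s.Result
FROM Samples s
JOIN MonitoringLocation m
  ON m.MonitoringLocationID = s.MonitoringLocationID
ORDER BY m.MonitoringLocationID, s.SampleDate;
```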
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset provides all the necessary files to set up the AWS Tickit database. It includes one SQL source file, a folder of .csv files and a folder of .txt files. Each can be used to create the database based on user preferences.
The database consists of seven tables: two fact tables and five dimensions. The two fact tables each contain less than 200,000 rows, and the dimensions range from 11 rows in the CATEGORY table up to about 50,000 rows in the USERS table.
This dataset is ideal for practicing SQL operations, setting up data pipelines, and learning how to integrate different file formats for database initialization.
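For example, once the tables are loaded, a star-schema style query can aggregate the SALES fact table over its dimensions. The sketch below assumes the column names commonly used for the Tickit schema (catname, qtysold, pricepaid, eventid, catid); verify them against the provided SQL source file.

```sql
-- Tickets sold and revenue by event category in the Tickit sample database.
-- Column names assume the commonly used Tickit schema; check the SQL source file.
SELECT c.catname,
       SUM(s.qtysold)   AS tickets_sold,
       SUM(s.pricepaid) AS revenue
FROM sales s
JOIN event e    ON e.eventid = s.eventid
JOIN category c ON c.catid   = e.catid
GROUP BY c.catname
ORDER BY revenue DESC;
```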
Age and length frequency data for finfish and invertebrate species collected from commercial fishing vessels. Samples are collected by fisheries reporting specialists from fish dealers in ports along the northwest Atlantic Ocean from Maine to North Carolina.
nurulakbaral/sample-databases dataset hosted on Hugging Face and contributed by the HF Datasets community