6 datasets found
  1. Bike Store Relational Database | SQL

    • kaggle.com
    zip
    Updated Aug 21, 2023
    Cite
    Dillon Myrick (2023). Bike Store Relational Database | SQL [Dataset]. https://www.kaggle.com/datasets/dillonmyrick/bike-store-sample-database
    Explore at:
    zip (94412 bytes); available download formats
    Dataset updated
    Aug 21, 2023
    Authors
    Dillon Myrick
    Description

    This is the sample database from sqlservertutorial.net and a great dataset for learning SQL and practicing queries against a relational database.
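
    As a quick taste of the kind of practice query this dataset supports (a minimal sketch, assuming the schema used by the sqlservertutorial.net sample, e.g. production.products and sales.order_items; adjust the names to whatever you load the CSVs into):

    -- Top-selling products by units sold (schema names assumed from sqlservertutorial.net)
    SELECT p.product_name,
           SUM(oi.quantity) AS units_sold
    FROM sales.order_items AS oi
    JOIN production.products AS p ON p.product_id = oi.product_id
    GROUP BY p.product_name
    ORDER BY units_sold DESC;   -- add TOP 10 (SQL Server) or LIMIT 10 (PostgreSQL/SQLite) as needed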

    Database Diagram:

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4146319%2Fc5838eb006bab3938ad94de02f58c6c1%2FSQL-Server-Sample-Database.png?generation=1692609884383007&alt=media

    Terms of Use

    The sample database is copyrighted and cannot be used for commercial purposes. For example, it cannot be used for purposes including, but not limited to: selling it or including it in paid courses.

  2. Data from: Atlas of European Eel Distribution (Anguilla anguilla) in Portugal, Spain and France

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Mateo, Maria; Drouineau, Hilaire; Pella, Herve; Beaulaton, Laurent; Amilhat, Elsa; Bardonnet, Agnès; Domingos, Isabel; Fernández-Delgado, Carlos; De Miguel Rubio, Ramon; Herrera, Mercedes; Korta, Maria; Zamora, Lluis; Díaz, Estibalitz; Briand, Cédric (2024). Atlas of European Eel Distribution (Anguilla anguilla) in Portugal, Spain and France [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6021837
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    University of Córdoba
    EPTB-Vilaine
    OFB
    FCUL/MARE
    INRAe
    University of Perpignan
    University of Girona
    AZTI
    Authors
    Mateo, Maria; Drouineau, Hilaire; Pella, Herve; Beaulaton, Laurent; Amilhat, Elsa; Bardonnet, Agnès; Domingos, Isabel; Fernández-Delgado, Carlos; De Miguel Rubio, Ramon; Herrera, Mercedes; Korta, Maria; Zamora, Lluis; Díaz, Estibalitz; Briand, Cédric
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Spain, France, Portugal
    Description

    DESCRIPTION

    VERSIONS

    version 1.0.1: fixes a problem with functions

    version 1.0.2: adds the table dbeel_rivers.rn_rivermouth with GEREM basin, distance to Gibraltar and link to CCM

    version 1.0.3: fixes a problem with functions

    version 1.0.4: adds the views rn_rna and rn_rne to the database

    The SUDOANG project aims at providing common tools to managers to support eel conservation in the SUDOE area (Spain, France and Portugal). VISUANG is the SUDOANG Interactive Web Application that hosts all these tools. The application consists of an eel distribution atlas (GT1), assessments of mortalities caused by turbines and an atlas showing obstacles to migration (GT2), estimates of recruitment and exploitation rate (GT3) and escapement (chosen as a target by the EC for the Eel Management Plans) (GT4). In addition, it includes an interactive map showing sampling results from the pilot basin network produced by GT6.

    The eel abundance for the eel atlas and escapement has been obtained using the Eel Density Analysis model (EDA, GT4's product). EDA extrapolates the abundance of eel in sampled river segments to other segments taking into account how the abundance, sex and size of the eels change depending on different parameters. Thus, EDA requires two main data sources: those related to the river characteristics and those related to eel abundance and characteristics.

    However, in both cases, data availability was uneven in the SUDOE area. In addition, this information was dispersed among several managers and in different formats due to different sampling sources: Water Framework Directive (WFD), Community Framework for the Collection, Management and Use of Data in the Fisheries Sector (EUMAP), Eel Management Plans, research groups, scientific papers and technical reports. Therefore, the first step towards having eel abundance estimations covering the whole SUDOE area was to build a joint river and eel database. In this report we describe the database corresponding to the rivers' characteristics in the SUDOE area and to the eel abundances and their characteristics.

    In the case of rivers, two types of information have been collected:

    River topology (RN table): a compilation of data on rivers and their topological and hydrographic characteristics in the three countries.

    River attributes (RNA table): contains physical attributes that have fed the SUDOANG models.

    The estimates of eel abundance and characteristics (size, biomass, sex ratio and silvering) at different scales (river segment, basin, Eel Management Unit (EMU), and country) in the SUDOE area, obtained with the implementation of the EDA2.3 model, have been compiled in the RNE table (eel predictions).

    CURRENT ACTIVE PROJECT

    The project is currently active here: gitlab forgemia

    TECHNICAL DESCRIPTION TO BUILD THE POSTGRES DATABASE

    1. Build the database in postgres.

    All tables are in EPSG:3035 (European LAEA). The format is a PostgreSQL database. You can download other formats (shapefiles, csv) here: SUDOANG gt1 database.

    Initial commands

    Open a shell (the CMD command on Windows).

    Move to the place where you have downloaded the file using the following command:

    cd c:/path/to/my/folder

    Note: psql must be accessible; on Windows you can add the PostgreSQL bin folder to the PATH, otherwise you need to prefix the commands with the full path to the bin folder (see the link to instructions below).

    createdb -U postgres eda2.3
    psql -U postgres eda2.3

    This will open a prompt ending in # where you can launch the commands in the next box.

    Within the psql shell

    create extension "postgis";
    create extension "dblink";
    create extension "ltree";
    create extension "tablefunc";
    create schema dbeel_rivers;
    create schema france;
    create schema spain;
    create schema portugal;
    -- type \q to quit the psql shell

    Now the database is ready to receive the different dumps. The dump files are large; you might not need the parts covering unit basins or waterbodies. All the tables except waterbodies and unit basins are described in the Atlas. You might also need to understand what inheritance is in a database: https://www.postgresql.org/docs/12/tutorial-inheritance.html
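
    As a minimal sketch of what inheritance means here (using the rn tables and the country column listed below): querying the parent table also returns the rows stored in the country tables, while ONLY restricts a query to the parent itself.

    -- Rows of france.rn, spain.rn and portugal.rn are visible through the parent table
    SELECT country, count(*) AS n_segments
    FROM dbeel_rivers.rn
    GROUP BY country;

    -- ONLY excludes the inheriting country tables
    SELECT count(*) FROM ONLY dbeel_rivers.rn;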

    2. RN (riversegments)

    These layers contain the topology (see Atlas for detail)

    dbeel_rivers.rn

    france.rn

    spain.rn

    portugal.rn

    Columns (see Atlas)

    gid, idsegment, source, target, lengthm, nextdownidsegment, path, isfrontier, issource, seaidsegment, issea, geom, isendoreic, isinternational, country
    

    dbeel_rivers.rn_rivermouth

    seaidsegment, geom (polygon), gerem_zone_3, gerem_zone_4 (used in EDA), gerem_zone_5, ccm_wso_id, country, emu_name_short, geom_outlet (point), name_basin, dist_from_gibraltar_km, name_coast, basin_name
    

    dbeel_rivers.rn is mandatory: it is the table at the international level from which the other tables inherit. Even if you don't want to use the other countries (in many cases you should, since there are transboundary catchments), download this first.

    The rn network must be restored first! The rna and rne tables refer to it by foreign keys.

    pg_restore -U postgres -d eda2.3 "dbeel_rivers.rn.backup"

    france

    pg_restore -U postgres -d eda2.3 "france.rn.backup"

    spain

    pg_restore -U postgres -d eda2.3 "spain.rn.backup"

    portugal

    pg_restore -U postgres -d eda2.3 "portugal.rn.backup"
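
    At this point a quick check confirms that the network was restored in the expected projection (a minimal sketch, assuming PostGIS is installed as above and using the geom column listed earlier):

    -- All river segment geometries should report SRID 3035 (European LAEA)
    SELECT DISTINCT ST_SRID(geom) FROM dbeel_rivers.rn;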

    Rivermouth and basins: this file contains the GEREM basins, the distance to Gibraltar and the link to the CCM id for each basin flowing to the sea.

    pg_restore -U postgres -d eda2.3 "dbeel_rivers.rn_rivermouth.backup"

    With the schema you will probably want to be able to use the functions, but launch this only after restoring rna in the next step:

    psql -U postgres -d eda2.3 -f "function_dbeel_rivers.sql"
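
    Once rn and rn_rivermouth are restored they can be joined on seaidsegment, for example to attach each sea-flowing basin's name and distance to Gibraltar to its river segments (a minimal sketch based on the columns listed above):

    -- Basin name and distance to Gibraltar of the outlet associated with each segment
    SELECT r.idsegment,
           m.name_basin,
           m.dist_from_gibraltar_km
    FROM dbeel_rivers.rn AS r
    JOIN dbeel_rivers.rn_rivermouth AS m USING (seaidsegment)
    LIMIT 10;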

    3. RNA (Attributes)

    This corresponds to tables

    dbeel_rivers.rna

    france.rna

    spain.rna

    portugal.rna

    Columns (See Atlas)

    idsegment, altitudem, distanceseam, distancesourcem, cumnbdam, medianflowm3ps, surfaceunitbvm2, surfacebvm2, strahler, shreeve, codesea, name, pfafriver, pfafsegment, basin, riverwidthm, temperature, temperaturejan, temperaturejul, wettedsurfacem2, wettedsurfaceotherm2, lengthriverm, emu, cumheightdam, riverwidthmsource, slope, dis_m3_pyr_riveratlas, dis_m3_pmn_riveratlas, dis_m3_pmx_riveratlas, drought, drought_type_calc
    

    Code:

    pg_restore -U postgres -d eda2.3 "dbeel_rivers.rna.backup"
    pg_restore -U postgres -d eda2.3 "france.rna.backup"
    pg_restore -U postgres -d eda2.3 "spain.rna.backup"
    pg_restore -U postgres -d eda2.3 "portugal.rna.backup"

    4. RNE (eel predictions)

    These layers contain eel data (see Atlas for detail)

    dbeel_rivers.rne

    france.rne

    spain.rne

    portugal.rne

    Columns (see Atlas)

    idsegment, surfaceunitbvm2, surfacebvm2, delta, gamma, density, neel, beel, peel150, peel150300, peel300450, peel450600, peel600750, peel750, nsilver, bsilver, psilver150300, psilver300450, psilver450600, psilver600750, psilver750, psilver, pmale150300, pmale300450, pmale450600, pfemale300450, pfemale450600, pfemale600750, pfemale750, pmale, pfemale, sex_ratio, cnfemale300450, cnfemale450600, cnfemale600750, cnfemale750, cnmale150300, cnmale300450, cnmale450600, cnsilver150300, cnsilver300450, cnsilver450600, cnsilver600750, cnsilver750, cnsilver, delta_tr, gamma_tr, type_fit_delta_tr, type_fit_gamma_tr, density_tr, density_pmax_tr, neel_pmax_tr, nsilver_pmax_tr, density_wd, neel_wd, beel_wd, nsilver_wd, bsilver_wd, sector_tr, year_tr, is_current_distribution_area, is_pristine_distribution_area_1985
    

    Code for restoration:

    pg_restore -U postgres -d eda2.3 "dbeel_rivers.rne.backup"
    pg_restore -U postgres -d eda2.3 "france.rne.backup"
    pg_restore -U postgres -d eda2.3 "spain.rne.backup"
    pg_restore -U postgres -d eda2.3 "portugal.rne.backup"

    5. Unit basins

    Unit basins are not described in the Atlas. They correspond to the following tables:

    dbeel_rivers.basinunit_bu

    france.basinunit_bu

    spain.basinunit_bu

    portugal.basinunit_bu

    france.basinunitout_buo

    spain.basinunitout_buo

    portugal.basinunitout_buo

    The unit basin is the simple basin that surrounds a segment. It corresponds to the topographic unit from which unit segments have been calculated (EPSG:3035). The tables bu_unitbv and bu_unitbvout inherit from dbeel_rivers.unit_bv. The first table intersects with a segment; the second does not, and corresponds to basin polygons which do not have a river segment.

    Source:

    Portugal

    https://sniambgeoviewer.apambiente.pt/Geodocs/gml/inspire/HY_PhysicalWaters_DrainageBasinGeoCod.zip

    France

    In France, the unit bv corresponds to the RHT (Pella et al., 2012).

    Spain

    http://www.mapama.gob.es/ide/metadatos/index.html?srv=metadata.show&uuid=898f0ff8-f06c-4c14-88f7-43ea90e48233

    pg_restore -U postgres -d eda2.3 'dbeel_rivers.basinunit_bu.backup'

    france

    pg_restore -U postgres -d eda2.3

  3. PostgreSQL Dump of IMDB Data for JOB Workload

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Marcus, Ryan (2023). PostgreSQL Dump of IMDB Data for JOB Workload [Dataset]. http://doi.org/10.7910/DVN/2QYZBT
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Marcus, Ryan
    Description

    This is a dump generated by pg_dump -Fc of the IMDb data used in the "How Good are Query Optimizers, Really?" paper. PostgreSQL compatible SQL queries and scripts to automatically create a VM with this dataset can be found here: https://git.io/imdb
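
    After restoring the dump (pg_restore handles the -Fc custom format), a quick sanity query can confirm the data is in place; this is a minimal sketch, and the title table's production_year column is an assumption based on the JOB schema used in the paper:

    -- Count titles per production year as a quick check after restoring
    SELECT production_year, count(*) AS n_titles
    FROM title
    WHERE production_year IS NOT NULL
    GROUP BY production_year
    ORDER BY production_year DESC
    LIMIT 10;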

  4. Dataset of A Large-scale Study about Quality and Reproducibility of Jupyter...

    • zenodo.org
    application/gzip
    Updated Mar 16, 2021
    + more versions
    Cite
    João Felipe; Leonardo; Vanessa; Juliana (2021). Dataset of A Large-scale Study about Quality and Reproducibility of Jupyter Notebooks / Understanding and Improving the Quality and Reproducibility of Jupyter Notebooks [Dataset]. http://doi.org/10.5281/zenodo.3519618
    Explore at:
    application/gzip; available download formats
    Dataset updated
    Mar 16, 2021
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    João Felipe; Leonardo; Vanessa; Juliana
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The self-documenting aspects and the ability to reproduce results have been touted as significant benefits of Jupyter Notebooks. At the same time, there has been growing criticism that the way notebooks are being used leads to unexpected behavior, encourages poor coding practices and that their results can be hard to reproduce. To understand good and bad practices used in the development of real notebooks, we analyzed 1.4 million notebooks from GitHub. Based on the results, we proposed and evaluated Julynter, a linting tool for Jupyter Notebooks.

    Papers:

    This repository contains three files:

    Reproducing the Notebook Study

    The db2020-09-22.dump.gz file contains a PostgreSQL dump of the database, with all the data we extracted from notebooks. For loading it, run:

    gunzip -c db2020-09-22.dump.gz | psql jupyter

    Note that this file contains only the database with the extracted data. The actual repositories are available in a Google Drive folder, which also contains the Docker images we used in the reproducibility study. The repositories are stored as content/{hash_dir1}/{hash_dir2}.tar.bz2, where hash_dir1 and hash_dir2 are columns of the repositories table in the database.
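
    For example, the archive path of each repository can be reconstructed directly from the database (a minimal sketch; the repositories table and its hash_dir1/hash_dir2 columns are named above, everything else follows from that):

    -- Rebuild content/{hash_dir1}/{hash_dir2}.tar.bz2 paths from the repositories table
    SELECT 'content/' || hash_dir1 || '/' || hash_dir2 || '.tar.bz2' AS archive_path
    FROM repositories
    LIMIT 5;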

    For scripts, notebooks, and detailed instructions on how to analyze or reproduce the data collection, please check the instructions on the Jupyter Archaeology repository (tag 1.0.0)

    The sample.tar.gz file contains the repositories obtained during the manual sampling.

    Reproducing the Julynter Experiment

    The julynter_reproducibility.tar.gz file contains all the data collected in the Julynter experiment and the analysis notebooks. Reproducing the analysis is straightforward:

    • Uncompress the file: $ tar zxvf julynter_reproducibility.tar.gz
    • Install the dependencies: $ pip install -r julynter/requirements.txt
    • Run the notebooks in order: J1.Data.Collection.ipynb; J2.Recommendations.ipynb; J3.Usability.ipynb.

    The collected data is stored in the julynter/data folder.

    Changelog

    2019/01/14 - Version 1 - Initial version
    2019/01/22 - Version 2 - Update N8.Execution.ipynb to calculate the rate of failure for each reason
    2019/03/13 - Version 3 - Update package for camera ready. Add columns to db to detect duplicates, change notebooks to consider them, and add N1.Skip.Notebook.ipynb and N11.Repository.With.Notebook.Restriction.ipynb.
    2021/03/15 - Version 4 - Add Julynter experiment; Update database dump to include new data collected for the second paper; remove scripts and analysis notebooks from this package (moved to GitHub), add a link to Google Drive with collected repository files

  5. ParkingDB HCMCity PostgreSQL

    • kaggle.com
    zip
    Updated Dec 26, 2024
    Cite
    Nghĩa Trung (2024). ParkingDB HCMCity PostgreSQL [Dataset]. https://www.kaggle.com/datasets/ren294/parkingdb-hcmcity-postgres
    Explore at:
    zip (504763600 bytes); available download formats
    Dataset updated
    Dec 26, 2024
    Authors
    Nghĩa Trung
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This database supports the "SmartTraffic_Lakehouse_for_HCMC" project, designed to improve traffic management in Ho Chi Minh City by leveraging big data and modern lakehouse architecture.

    This database manages operations for a parking lot system in Ho Chi Minh City, Vietnam, tracking everything from parking records to customer feedback. The database contains operational data for managing parking facilities, including vehicle tracking, payment processing, customer management, and staff scheduling. It's an excellent example of a comprehensive system for managing a modern parking infrastructure, handling different vehicle types (cars, motorbikes, and bicycles) and various payment methods.

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F13779146%2F40c8bbd9fd27a7b9fbe7c77598512cf2%2FParkingTransaction.png?generation=1735218498592627&alt=media

    The parking management database includes sample data for the following:

    • Owner: Customer information including contact details, enabling personalized service and feedback tracking
    • Vehicle: Detailed vehicle information linked to owners, including license plates, types, colors, and brands
    • ParkingLot: Information about different parking facilities at shopping malls, including capacity management for different vehicle types and hourly rates
    • ParkingRecord: Tracks vehicle entry/exit times and calculated parking fees
    • Payment: Records payment transactions with various payment methods (Cash, E-Wallet)
    • Feedback: Stores customer ratings and comments about parking services
    • Promotion: Manages promotional campaigns with discount rates and valid periods
    • Staff: Manages parking facility employees, including roles, contact information, and shift schedules

    The design reflects real-world requirements for managing complex parking operations in a busy metropolitan area. The system can track occupancy rates, process payments, manage staff schedules, and handle customer relations across multiple locations.
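
    As a sketch of the kind of query this schema supports (the column names lot_id, record_id and amount are hypothetical; check the actual schema after restoring the dump):

    -- Hypothetical example: parking volume and revenue per parking lot
    SELECT pl.lot_id,
           COUNT(pr.record_id) AS n_parking_records,
           SUM(pay.amount)     AS total_revenue
    FROM ParkingRecord AS pr
    JOIN ParkingLot    AS pl  ON pl.lot_id     = pr.lot_id
    JOIN Payment       AS pay ON pay.record_id = pr.record_id
    GROUP BY pl.lot_id
    ORDER BY total_revenue DESC;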

    Note: This database is part of the SmartTraffic_Lakehouse_for_HCMC project, designed to improve urban mobility management in Ho Chi Minh City. All data contained within is simulated for demonstration and development purposes. The project was created by Nguyen Trung Nghia (ren294) and is available on GitHub.

    About my project:

    • Project: SmartTraffic_Lakehouse_for_HCMC
    • Author: Nguyen Trung Nghia (ren294)
    • Contact: trungnghia294@gmail.com
    • GitHub: Ren294

  6. #Charlottesville on Twitter

    • kaggle.com
    zip
    Updated Aug 21, 2017
    Cite
    VincentLa (2017). #Charlottesville on Twitter [Dataset]. https://www.kaggle.com/datasets/vincela9/charlottesville-on-twitter/code
    Explore at:
    zip (86541190 bytes); available download formats
    Dataset updated
    Aug 21, 2017
    Authors
    VincentLa
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    Charlottesville
    Description

    Charlottesville, Virginia

    Charlottesville is home to a statue of Robert E. Lee which is slated to be removed. (For those unfamiliar with American history, Robert E. Lee was a US Army general who defected to the Confederacy during the American Civil War and was considered to be one of their best military leaders.) While many Americans support the move, believing the main purpose of the Confederacy was to defend the institution of slavery, many others do not share this view. Furthermore, believing Confederate symbols to be merely an expression of Southern pride, many have not taken its planned removal lightly.

    As a result, many people--including white nationalists and neo-Nazis--have descended to Charlottesville to protest its removal. This in turn attracted many counter-protestors. Tragically, one of the counter-protestors--Heather Heyer--was killed and many others injured after a man intentionally rammed his car into them. In response, President Trump blamed "both sides" for the chaos in Charlottesville, leading many Americans to denounce him for what they see as a soft-handed approach to what some have called an act of "domestic terrorism."

    The dataset below captures the discussion--and copious amounts of anger--revolving around this past week's events.

    The Data

    Description

    This data set consists of a random sample of 50,000 tweets per day (in accordance with the Twitter Developer Agreement) mentioning Charlottesville or containing "#charlottesville", extracted via the Twitter Streaming API starting on August 15. The files were copied from a large Postgres database currently containing over 2 million tweets. Finally, a table of tweet counts per timestamp was created using the whole database (not just the Kaggle sample). The data description PDF provides a full summary of the attributes found in the CSV files.

    Note: While the tweet timestamps are in UTC, the cutoffs were based on Eastern Standard Time, so the August 16 file will have timestamps ranging from 2017-08-16 4:00:00 UTC to 2017-08-17 4:00:00 UTC.

    Format

    The dataset is available as either separate CSV files or a single SQLite database.
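
    For instance, the per-hour tweet volume can be recomputed from the SQLite database (a minimal sketch; the tweets table and its created_at timestamp column are hypothetical names, see the data description PDF for the real ones):

    -- Hypothetical example: tweet counts per UTC hour from the SQLite file
    SELECT strftime('%Y-%m-%d %H:00', created_at) AS hour_utc,
           COUNT(*) AS n_tweets
    FROM tweets
    GROUP BY hour_utc
    ORDER BY hour_utc;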

    License

    I'm releasing the dataset under the CC BY-SA 4.0 license. Furthermore, because this data was extracted via the Twitter Streaming API, its use must abide by the Twitter Developer Agreement. Most notably, the display of individual tweets should satisfy these requirements. More information can be found in the data description file, or on Twitter's website.

    Acknowledgements

    Obviously, I would like to thank Twitter for providing a fast and reliable streaming service. I'd also like to thank the developers of the Python programming language, psycopg2, and Postgres for creating amazing software with which this data set would not exist.

    Image Credit

    The banner above is a personal modification of these images:

    Inspiration

    I almost removed the header "inspiration" from this section, because this is a rather sad and dark data set. However, this is precisely why this is an important data set to analyze. Good history books have never shied away from unpleasant events, and never should we.

    This data set provides a rich opportunity for many types of research, including:

    • Natural language processing
    • Sentiment analysis
    • Data visualization

    Furthermore, given the political nature of this dataset, there are a lot of social science questions that can potentially be answered, or at least piqued, by this data.

