This dataset contains information about housing sales in Nashville, TN, such as property, owner, sales, and tax information. The SQL queries I created for data cleaning can be found here.
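The cleaning queries themselves are only linked, not reproduced in this listing. As a rough, hypothetical illustration of the kind of transformation such a project typically involves, the sketch below standardizes a sale-date column and fills missing property addresses in an assumed nashville_housing table; the table and column names are placeholders, not this dataset's actual schema.

```python
import sqlite3

# Hypothetical schema: nashville_housing(parcel_id, property_address, sale_date_text, sale_date)
conn = sqlite3.connect("nashville_housing.db")
cur = conn.cursor()

# Standardize the raw sale-date text into ISO format using SQLite's date() function.
cur.execute("UPDATE nashville_housing SET sale_date = date(sale_date_text);")

# Fill missing property addresses from other rows that share the same parcel ID.
cur.execute("""
    UPDATE nashville_housing
    SET property_address = (
        SELECT b.property_address
        FROM nashville_housing AS b
        WHERE b.parcel_id = nashville_housing.parcel_id
          AND b.property_address IS NOT NULL
        LIMIT 1
    )
    WHERE property_address IS NULL;
""")

conn.commit()
conn.close()
```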
Check out our data lens page for additional data filtering and sorting options: https://data.cityofnewyork.us/view/i4p3-pe6a
This dataset contains Open Parking and Camera Violations issued by the City of New York. Updates will be applied to this data set on the following schedule:
New or open tickets will be updated weekly (Sunday). Satisfied tickets will be updated daily (Tuesday through Sunday). NOTE: Summonses that have been written off are indicated by blank financials.
Summons images will not be available during scheduled downtime on Sunday and Monday from 1:00 am to 2:30 am, and on Sundays from 5:00 am to 10:00 am.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Andrew Dolcimascolo-Garrett
Released under MIT
This dataset was created by Thejeswini.V
This dataset was created by Parth Mistry 20
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Najir 0123
Released under MIT
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Library Dataset for SQL Project
Watch Full Video -- https://www.youtube.com/watch?v=6X2-P9fNVvw
Project Files -- https://github.com/najirh/Library-System-Management---P2?tab=readme-ov-file
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This project developed a comprehensive data management system designed to support collaborative groundwater research across institutions by establishing a centralized, structured database for hydrologic time series data. Built on the Observations Data Model (ODM), the system stores time series data and metadata in a relational SQLite database. Key project components included database construction, automation of data formatting and importation, development of analytical and visualization tools, and integration with ArcGIS for geospatial representation. The data import workflow standardizes and validates diverse .csv datasets by aligning them with ODM formatting. A Python-based module was created to facilitate data retrieval, analysis, visualization, and export, while an interactive map feature enables users to explore site-specific data availability. Additionally, a custom ArcGIS script was implemented to generate maps that incorporate stream networks, site locations, and watershed boundaries using DEMs from USGS sources. The system was tested using real-world datasets from groundwater wells and surface water gages across Utah, demonstrating its flexibility in handling diverse formats and parameters. The relational structure enabled efficient querying and visualization, and the developed tools promoted accessibility and alignment with FAIR principles.
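The project's Python module itself is not included in this description. As a minimal sketch of the retrieval-and-plot workflow it outlines, the snippet below reads one site's time series from an ODM-style SQLite file; the file name, site code, variable code, and exact table/column names follow the general ODM layout and are assumptions rather than values from this database.

```python
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt

# Path is a placeholder for the ODM-style SQLite database described above.
conn = sqlite3.connect("groundwater_odm.sqlite")

# Core ODM-style join: DataValues linked to Sites and Variables.
query = """
SELECT dv.LocalDateTime, dv.DataValue, v.VariableName, s.SiteName
FROM DataValues AS dv
JOIN Sites AS s ON s.SiteID = dv.SiteID
JOIN Variables AS v ON v.VariableID = dv.VariableID
WHERE s.SiteCode = ? AND v.VariableCode = ?
ORDER BY dv.LocalDateTime;
"""
# Site and variable codes are placeholders, not values from this dataset.
df = pd.read_sql_query(query, conn, params=("EXAMPLE_WELL_01", "WaterLevel"))
conn.close()

# Quick look at the retrieved series.
if not df.empty:
    df.plot(x="LocalDateTime", y="DataValue", title=df["SiteName"].iloc[0])
    plt.show()
```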
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Blockchain data query: a SQL query for TON projects with high trading volume.
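The query itself is not included in this listing. Purely as an illustrative sketch, the query below sums trade volume per project over a hypothetical ton_trades table and keeps only high-volume projects; the table name, columns, 30-day window, and volume threshold are all assumptions.

```python
import sqlite3
import pandas as pd

# Hypothetical schema: ton_trades(project, trade_time, volume_usd)
conn = sqlite3.connect("ton_trades.db")

query = """
SELECT project, SUM(volume_usd) AS total_volume_usd
FROM ton_trades
WHERE trade_time >= datetime('now', '-30 days')
GROUP BY project
HAVING SUM(volume_usd) > 1000000  -- arbitrary "high volume" cutoff
ORDER BY total_volume_usd DESC;
"""
print(pd.read_sql_query(query, conn))
conn.close()
```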
This dataset was created by Deepali Sukhdeve
The SQL files and queries used to acquire SQL certification, performed in a MySQL environment. The queries are listed below for reference, to follow along with the output, and to view the queried data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py
The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational Mapper (ORM).
This database dump includes all published structured data organized and used by Open Context (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long-term digital repositories.
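As a minimal sketch of how a restored copy of this dump might be explored, the snippet below lists the 'oc_all_' tables and counts their rows via psycopg2; the connection parameters and database name are assumptions for a local restore, not part of the published dump.

```python
import psycopg2

# Placeholder connection details for a locally restored copy of the dump.
conn = psycopg2.connect(dbname="opencontext", user="postgres", host="localhost")
cur = conn.cursor()

# List every table from the published structured-data schema ('oc_all_' prefix).
cur.execute("""
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = 'public' AND table_name LIKE 'oc_all_%'
    ORDER BY table_name;
""")
for (table_name,) in cur.fetchall():
    cur.execute(f'SELECT COUNT(*) FROM "{table_name}";')
    print(table_name, cur.fetchone()[0])

conn.close()
```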
These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.
IMPORTANT
This database dump contains data from more than 190 different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo-archived database dump.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comprehensive open source project metrics including contributor activity, popularity trends, development velocity, and security assessments for Go-MySQL-Driver.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Context (https://opencontext.org) publishes free and open access research data for archaeology and related disciplines. An open source (but bespoke) Django (Python) application supports these data publishing services. The software repository is here: https://github.com/ekansa/open-context-py
The Open Context team runs ETL (extract, transform, load) workflows to import data contributed by researchers from various source relational databases and spreadsheets. Open Context uses a PostgreSQL (https://www.postgresql.org) relational database to manage these imported data in a graph-style schema. The Open Context Python application interacts with the PostgreSQL database via the Django Object-Relational Mapper (ORM).
In 2023, the Open Context team finished migrating from a legacy database schema to a revised and refactored database schema with stricter referential integrity and better consistency across tables. During this process, the Open Context team de-duplicated records, cleaned some metadata, and redacted attribute data left over from records that had been incompletely deleted in the legacy schema.
This database dump includes all Open Context data organized with the legacy schema (table names that start with the 'oc_' or 'link_' prefixes) along with all Open Context data after cleanup and migration to the new database schema (table names that start with 'oc_all_'). The binary media files referenced by these structured data records are stored elsewhere. Binary media files for some projects, still in preparation, are not yet archived with long term digital repositories.
These data comprehensively reflect the structured data currently published and publicly available on Open Context. Other data (such as user and group information) used to run the Website are not included.
IMPORTANT
This database dump contains data from roughly 180 different projects. Each project dataset has its own metadata and citation expectations. If you use these data, you must cite each data contributor appropriately, not just this Zenodo-archived database dump.
AutoTrain Dataset for project: sql-injection
Dataset Description
This dataset has been automatically processed by AutoTrain for project sql-injection.
Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
Data Instances
A sample from this dataset looks as follows: [ { "feat_Data": "1' where 5230 = 5230", "target": 1 }, { "feat_Data": "1'+ ( select ouhd where 8905 = 8905 and 6510 =… See the full description on the dataset page: https://huggingface.co/datasets/firdhokk/autotrain-data-sql-injection.
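A minimal sketch of loading this dataset with the Hugging Face datasets library, using the repository id from the link above; the split name and the exact feature names should be confirmed against the dataset card.

```python
from datasets import load_dataset

# Repository id taken from the dataset page linked above.
ds = load_dataset("firdhokk/autotrain-data-sql-injection")

# Inspect the splits and one example; 'train', 'feat_Data', and 'target'
# are assumptions to verify against the dataset card.
print(ds)
sample = ds["train"][0]
print(sample.get("feat_Data"), sample.get("target"))
```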
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
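The example itself is not included in this listing. The sketch below shows, in broad strokes, how an HTTP SQL call to Splitgraph might look from Python; the endpoint URL, payload shape, and the repository/table name are all assumptions to verify against the Splitgraph documentation referenced below.

```python
import requests

# Endpoint and payload shape are assumptions; confirm both in the Splitgraph docs.
SPLITGRAPH_SQL_ENDPOINT = "https://data.splitgraph.com/sql/query/ddn"

# Placeholder repository and table name standing in for this dataset.
payload = {"sql": 'SELECT * FROM "example/repository".example_table LIMIT 10;'}

resp = requests.post(SPLITGRAPH_SQL_ENDPOINT, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```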
See the Splitgraph documentation for more information.
This feature layer displays historical project locations as tracked by engineering staff at Leon County Public Works. This feature layer is a subset (view) of a parent hosted feature layer that is updated twice daily from a cloud-hosted (Azure) SQL database administered by the Leon County Applications team and maintained by engineering staff at Leon County Public Works.
The BlueBio database is a comprehensive and robust compilation of international and national research projects active in the years 2003-2022 in Fisheries, Aquaculture, Seafood Processing and Marine Biotechnology. Based on the COFASP projects’ database, it was implemented within the ERA-NET Cofund BlueBio project through a 3-year data collection including 3 surveys (.xlsx file) and a wide data retrieval. After being integrated, the data were harmonised, shared as open data and disseminated through a WebGIS (https://bluebioeconomy.eu/the-bluebio-projects-online-database/). The WebGIS was key for data update and validation, as users were enabled to add new projects and edit existing ones. The database consists of 3,761 “georeferenced” projects, described by 22 parameters that are clustered into textual and spatial, some directly collected and others deduced. The database is a living archive to inform actors of the Blue Bioeconomy sector in a period of rapid transformations and research needs. The selection of projects to be included in the database was based on a combination of keywords, previously identified by the database administrators, searched within the title, abstract, and, when available, keywords. The complete BlueBio database information backup (.sql file) should be restored in PostgreSQL. Collected research projects are also presented in a more user-friendly .csv file, along with the R code used for its extraction. Compared to the previous version, the database has been further enhanced by adding projects from the years 2020-2022 and by finalizing the harmonization of the data to achieve a more detailed and accurate allocation of projects. The related report MS37 reports the results of a series of exploratory analyses aimed at describing the information contained in the BlueBio research projects’ database.
Corresponding author: Anna Nora Tassetti, annanora.tassetti@cnr.it
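As a brief sketch of the restore step mentioned above, assuming the psql client is installed and the target database already exists; the database and file names are placeholders.

```python
import subprocess

# Restore the BlueBio .sql backup into a local PostgreSQL database.
# "bluebio" and "bluebio_backup.sql" are placeholders; the target database
# must already exist (e.g. created beforehand with createdb).
subprocess.run(
    ["psql", "-d", "bluebio", "-f", "bluebio_backup.sql"],
    check=True,
)
```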
I completed a PostgreSQL project to hone my SQL abilities. Following a tutorial video, I worked on a music store data analysis. In the project, I used SQL queries to answer several questions about the music shop company.
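The project's own queries are not reproduced in this description. As one hypothetical example of the kind of question such an analysis answers, the sketch below ranks countries by revenue over an assumed invoice/customer schema; the table and column names are placeholders, not the schema used in the project.

```python
import sqlite3
import pandas as pd

# Hypothetical schema: invoice(invoice_id, customer_id, total),
# customer(customer_id, first_name, last_name, country).
conn = sqlite3.connect("music_store.db")

query = """
SELECT c.country, COUNT(i.invoice_id) AS purchases, SUM(i.total) AS revenue
FROM invoice AS i
JOIN customer AS c ON c.customer_id = i.customer_id
GROUP BY c.country
ORDER BY revenue DESC
LIMIT 10;
"""
print(pd.read_sql_query(query, conn))
conn.close()
```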
Reported data only. Data unavailable for Fiscal Year 2015-2016.
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
See the Splitgraph documentation for more information.