33 datasets found
  1. Seair Exim Solutions

    • seair.co.in
    + more versions
    Cite
    Seair Exim, Seair Exim Solutions [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    India
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is intended for market analysis.

  2. Seair Exim Solutions

    • seair.co.in
    + more versions
    Cite
    Seair Exim, Seair Exim Solutions [Dataset]. https://www.seair.co.in
    Explore at:
    Available download formats: .bin, .xml, .csv, .xls
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    India
    Description

    Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is intended for market analysis.

  3. Chatham County - Fiber Lines

    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • hub.arcgis.com
    • +1more
    Updated Oct 31, 2016
    + more versions
    Cite
    Chatham County GIS Portal (2016). Chatham County - Fiber Lines [Dataset]. https://arc-gis-hub-home-arcgishub.hub.arcgis.com/maps/ChathamncGIS::chatham-county-fiber-lines
    Explore at:
    Dataset updated
    Oct 31, 2016
    Dataset authored and provided by
    Chatham County GIS Portal
    Area covered
    Description

    Line features representing existing and proposed fiber lines (buried & aerial) owned by Chatham County MIS in Chatham County, NC. These fiber lines are utilized for the NC811 notification system that Chatham County MIS participates in.

    The buried fiber line features are utilized to create a buffer polygon that is imported into NC811's notification system serving as the basis for all Chatham County MIS notifications. The original data was collected by Performance Cabling using industry standard GPS collection methods. The data was delivered to Chatham County MIS / GIS in May of 2015. The data was imported into the ChathamGIS SQL database in August 2015 and stored in the "infrastructure" dataset.

    The ongoing data updates and maintenance will be conducted by Chatham County GIS in collaboration with Performance Cabling on an as-needed basis. Chatham GIS SOP: "MAPSERV-74"

  4. Current Population Survey (CPS)

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Damico, Anthony (2023). Current Population Survey (CPS) [Dataset]. http://doi.org/10.7910/DVN/AK4FDD
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Damico, Anthony
    Description

    analyze the current population survey (cps) annual social and economic supplement (asec) with r

    the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active-duty military population.

    the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

    this new github repository contains three scripts:

    2005-2012 asec - download all microdata.R
    - download the fixed-width file containing household, family, and person records
    - import by separating this file into three tables, then merge 'em together at the person-level
    - download the fixed-width file containing the person-level replicate weights
    - merge the rectangular person-level file with the replicate weights, then store it in a sql database
    - create a new variable - one - in the data table

    2012 asec - analysis examples.R
    - connect to the sql database created by the 'download all microdata' program
    - create the complex sample survey object, using the replicate weights
    - perform a boatload of analysis examples

    replicate census estimates - 2011.R
    - connect to the sql database created by the 'download all microdata' program
    - create the complex sample survey object, using the replicate weights
    - match the sas output shown in the png file below

    2011 asec replicate weight sas output.png - statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

    click here to view these three scripts

    for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
    - the census bureau's current population survey page
    - the bureau of labor statistics' current population survey page
    - the current population survey's wikipedia article

    notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

    confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
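For readers outside R, the fixed-width import step described above can be sketched in Python. This is a minimal sketch, not the repository's code: the column names and widths below are hypothetical, since the real layout comes from the NBER SAS input statements (what parse.SAScii reads).

```python
# Minimal sketch of a fixed-width import into SQLite.
# COLUMNS is a hypothetical layout, not the real CPS-ASEC record spec.
import sqlite3

COLUMNS = [("record_type", 1), ("age", 2), ("income", 6)]  # hypothetical

def parse_line(line):
    """Slice one fixed-width record into a dict keyed by column name."""
    row, pos = {}, 0
    for name, width in COLUMNS:
        row[name] = line[pos:pos + width].strip()
        pos += width
    return row

def load(lines, db=":memory:"):
    """Parse fixed-width lines and store them in a SQLite table."""
    conn = sqlite3.connect(db)
    conn.execute(
        "CREATE TABLE person (record_type TEXT, age INTEGER, income INTEGER)")
    conn.executemany(
        "INSERT INTO person VALUES (?, ?, ?)",
        [tuple(parse_line(l).values()) for l in lines],
    )
    return conn

# Two hypothetical 9-character records: type / age / zero-padded income.
conn = load(["325041250", "340009000"])
print(conn.execute("SELECT SUM(income) FROM person").fetchone()[0])
```

The same pattern scales to the real files by swapping in the NBER-derived column widths and streaming the file line by line instead of holding it in a list.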

  5. Kentucky Drug and Sex Crimes

    • narcis.nl
    • data.mendeley.com
    Updated Oct 8, 2021
    + more versions
    Cite
    Ahmed, S (via Mendeley Data) (2021). Kentucky Drug and Sex Crimes [Dataset]. http://doi.org/10.17632/ykwnrjm7f7.2
    Explore at:
    Dataset updated
    Oct 8, 2021
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Ahmed, S (via Mendeley Data)
    Area covered
    Kentucky
    Description

    Three crime data sources were collected and merged for this study. All three crime sources were either only reporting on the U.S. state of Kentucky (KOOL and Louisville Open Data), or filtered to only contain results for the U.S. state of Kentucky (FBI). Each data source contains unique features such as crime classifications, and unique challenges in collection and cleaning.

    The United States Federal Bureau of Investigation (FBI) issues a variety of queryable crime-related data on its website. This data is sourced from law enforcement agencies across the U.S. as part of the National Incident-Based Reporting System (NIBRS) and its standards. The goal of gathering, standardizing, and providing this information is to facilitate research into crime and law enforcement patterns. The information is provided as a collection of CSV files with instructions and code for importing into a SQL database. For the purposes of this research, we utilized the crime databases for the years 2017, 2018, and 2019, containing a total of 1,939,990 unique incidents. The NIBRS_code property denotes the type of crime as assigned by the reporting agency. The human trafficking codes are 40A (Prostitution), 40B (Assisting or Promoting Prostitution), and 370 (Pornography/Obscene Material). The drug incidents were found using codes 35A (Drug/Narcotic Violations) and 35B (Drug Equipment Violations).
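As an illustration of how the NIBRS code sets above partition incidents, a minimal Python sketch (the incident records here are hypothetical stand-ins, not FBI data; only the code values come from the description above):

```python
# Group (incident_id, nibrs_code) pairs by the study's offense-code sets.
# Code values are from the dataset description; records are hypothetical.
TRAFFICKING_CODES = {"40A", "40B", "370"}
DRUG_CODES = {"35A", "35B"}

def classify(incidents):
    """Split incident records into the study's three categories."""
    groups = {"trafficking": [], "drug": [], "other": []}
    for incident_id, code in incidents:
        if code in TRAFFICKING_CODES:
            groups["trafficking"].append(incident_id)
        elif code in DRUG_CODES:
            groups["drug"].append(incident_id)
        else:
            groups["other"].append(incident_id)
    return groups

sample = [(1, "35A"), (2, "40B"), (3, "13A"), (4, "370")]
print(classify(sample))
```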

    The Kentucky Department of Corrections, as a service to the public, provides an online lookup of people currently in its custody called Kentucky Offender Online Lookup (KOOL). This web application offers users tools to search for sets of inmates based on features such as name, crime date, crime name, race, and gender. The data that KOOL searches contains only people who are currently under supervision of the state of Kentucky (or should be under supervision in the case of escape).

    The Louisville Open Data Initiative (LOD) is a program from the city of Louisville, Kentucky, U.S.A. to increase the transparency of the city government and promote technological innovation. As part of LOD, a dataset of crime reports is made available online. The records contained within the LOD dataset represent any call for police service where a police incident report was generated. This does not necessarily mean a crime was committed, as an incident report can be generated before an investigation has taken place.

  6. The Fano 3-fold database

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 19, 2022
    Cite
    Kasprzyk, Alexander M. (2022). The Fano 3-fold database [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5820337
    Explore at:
    Dataset updated
    Jan 19, 2022
    Dataset provided by
    Brown, Gavin
    Kasprzyk, Alexander M.
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Fano 3-fold database

    This is a dataset that relates to the graded (homogeneous coordinate) rings of possible algebraic varieties: complex Fano 3-folds with Fano index 1. Each entry in this dataset records the (anticanonical) Hilbert series of a possible Fano 3-fold (X), along with the result of some analysis about how (X) may be (anticanonically) embedded in weighted projective space (\mathbb{P}(w_1,w_2,\ldots,w_s)).

    For details, see the paper [BK22], which is a companion and update to the original paper [ABR02].

    If you make use of this data, please consider citing [BK22] and the DOI for this data:

    doi:10.5281/zenodo.5820338

    The data consists of two files in key:value format, "fano3.txt" and "matchmaker.txt". The files "fano3.sql" and "matchmaker.sql" contain the same data as the key:value files, but formatted ready for import into SQLite.

    fano3.txt

    This file contains data that relates to the graded (homogeneous coordinate) rings of possible algebraic varieties. For each entry, the essential characteristic data is the genus and basket; everything else follows (with the exception of the ID). Briefly, this essential data determines a power series, the Hilbert series, (\text{Hilb}(X,-K_X) = 1 + h_1t + h_2t^2 + \ldots) that can be written as a rational function of the form ((\text{polynomial numerator in $t$}) / \prod_{i=1}^s(1-t^{w_i})), where (w_1,w_2,\ldots,w_s) are positive integer weights.

    The data consists of 52646 entries. The 39550 stable entries (that is, with 'stable' equal to 'true') are assigned an ID 'id' in the range 1-39550. The 13096 unstable entries (that is, with 'stable' equal to 'false') are assigned an ID in the range 41515-54610. IDs in the range 39551-41514 are assigned to the higher index Fano varieties, and are not included in this dataset.

    Example entry:
    id: 1
    weights: 5,6,7,...,16
    has_elephant: false
    genus: -2
    h1: 0
    h2: 0
    ...
    h10: 4
    numerator: t^317 - t^300 - 6*t^299 - ... + 1
    codimension: 24
    basket: 1/2(1,1,1),1/2(1,1,1),1/3(1,1,2),...,1/5(1,2,3)
    basket_size: 7
    equation_degrees: 17,18,18,...,27
    degree: 1/60
    k3_rank: 19
    bogomolov: -8/15
    kawamata: 1429/60
    stable: true

    (Some data truncated for readability.)

    Brief description of an entry:
    - id: a unique integer ID for this entry
    - genus: (h^0(X,-K_X)-2)
    - basket: multiset of quotient singularities (\frac{1}{r}(f,a,-a))
    - basket_size: number of elements in the 'basket'
    - k3_rank: (\sum(r-1)) taken over the 'basket'
    - kawamata: (\sum(r-\frac{1}{r})) taken over the 'basket'
    - bogomolov: sum of terms over the 'basket' relating to stability (see [BK22])
    - stable: true if and only if 'bogomolov' (\le0)
    - degree: anticanonical degree ((-K_X)^3) of (X), determined by the above data (see [BK22])
    - h1,h2,...,h10: coefficients of (t,t^2,\ldots,t^{10}) in the Hilbert series (\text{Hilb}(X,-K_X))
    - weights: suggestion of weights (w_1,w_2,\ldots,w_s) for the anticanonical embedding (X\subset\mathbb{P}(w_1,w_2,\ldots,w_s))
    - numerator: polynomial such that the Hilbert series (\text{Hilb}(X,-K_X)) is given by the power series expansion of (\text{'numerator'} / \prod_{i=1}^s(1-t^{w_i})), where the (w_i) in the denominator range over the 'weights'
    - codimension: the codimension of (X) in the suggested embedding, equal to (s - 4)
    - has_elephant: true if and only if (h_1 > 0)
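The key:value format can be read with a few lines of Python. This is a hedged sketch that assumes one "key: value" pair per line with blank lines separating entries; the exact on-disk layout of "fano3.txt" is an assumption here.

```python
# Parse "key: value" lines into one dict per entry.
# Assumes blank-line-separated entries (the file layout is an assumption).
def parse_entries(text):
    entries, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:  # blank line ends the current entry
            if current:
                entries.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    if current:  # flush the final entry
        entries.append(current)
    return entries

sample = "id: 1\nstable: true\n\nid: 2\nstable: false\n"
print(parse_entries(sample))
```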

    matchmaker.txt

    This file contains a set of pairs of IDs, in each case one from the canonical toric Fano classification [Kas10,toric] and one from "fano3.txt". The meaning is that the Hilbert series of the two agree, and this file contains all such agreeing pairs.

    Example entry:
    toric_id: 1
    fano3_id: 27334

    Brief description of an entry:
    - toric_id: integer ID in the range 1-674688, corresponding to an 'id' from the canonical toric Fano dataset [Kas10,toric]
    - fano3_id: an integer ID in the range 1-39550 or 41515-54610, corresponding to an 'id' from "fano3.txt"

    fano3.sql and matchmaker.sql

    The files "fano3.sql" and "matchmaker.sql" contain sqlite-formatted versions of the data described above, and can be imported into an sqlite database via, for example:

    $ cat fano3.sql matchmaker.sql | sqlite3 fano3.db

    This can then be easily queried. For example:

    $ sqlite3 fano3.db

    SELECT id FROM fano3 WHERE degree = 72 AND stable IS TRUE;
    39550
    SELECT toric_id FROM fano3totoricf3c WHERE fano3_id = 39550;
    547334
    547377
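The same queries can also be issued from Python's sqlite3 module. The sketch below builds a tiny in-memory stand-in for the two tables instead of importing the full dumps; the rows mirror the example query results shown above, and the table and column names follow the description of the data files.

```python
# Query the fano3 database from Python's sqlite3 module.
# An in-memory stand-in replaces the real fano3.sql/matchmaker.sql import;
# rows mirror the example SELECT output shown above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fano3 (id INTEGER PRIMARY KEY, degree REAL, stable BOOLEAN)")
conn.execute(
    "CREATE TABLE fano3totoricf3c (toric_id INTEGER, fano3_id INTEGER)")
conn.execute("INSERT INTO fano3 VALUES (39550, 72, 1)")
conn.executemany("INSERT INTO fano3totoricf3c VALUES (?, ?)",
                 [(547334, 39550), (547377, 39550)])

stable_ids = [r[0] for r in conn.execute(
    "SELECT id FROM fano3 WHERE degree = 72 AND stable IS TRUE")]
toric_ids = [r[0] for r in conn.execute(
    "SELECT toric_id FROM fano3totoricf3c WHERE fano3_id = ?", (39550,))]
print(stable_ids, toric_ids)
```

Against the real database created by `cat fano3.sql matchmaker.sql | sqlite3 fano3.db`, the same two queries run via `sqlite3.connect("fano3.db")`.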

    References

    [ABR02] Selma Altinok, Gavin Brown, and Miles Reid, "Fano 3-folds, K3 surfaces and graded rings", in Topology and geometry: commemorating SISTAG, volume 314 of Contemp. Math., pages 25-53. Amer. Math. Soc., Providence, RI, 2002.

    [BK22] Gavin Brown and Alexander Kasprzyk, "Kawamata boundedness for Fano threefolds and the Graded Ring Database", 2022.

    [Kas10] Alexander Kasprzyk, "Canonical toric Fano threefolds", Canadian Journal of Mathematics, 62(6), 1293-1309, 2010.

    [toric] Alexander Kasprzyk, "The classification of toric canonical Fano 3-folds", Zenodo, doi:10.5281/zenodo.5866330.

  7. Rediscovery Datasets: Connecting Duplicate Reports of Apache, Eclipse, and...

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Aug 3, 2024
    + more versions
    Cite
    Miranskyy, Andriy V. (2024). Rediscovery Datasets: Connecting Duplicate Reports of Apache, Eclipse, and KDE [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_400614
    Explore at:
    Dataset updated
    Aug 3, 2024
    Dataset provided by
    Sadat, Mefta
    Miranskyy, Andriy V.
    Bener, Ayse Basar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present three defect rediscovery datasets mined from Bugzilla. The datasets capture data for three groups of open source software projects: Apache, Eclipse, and KDE. They contain information about approximately 914 thousand defect reports over a period of 18 years (1999-2017), capturing the inter-relationships among duplicate defects.

    File Descriptions

    apache.csv - Apache Defect Rediscovery dataset

    eclipse.csv - Eclipse Defect Rediscovery dataset

    kde.csv - KDE Defect Rediscovery dataset

    apache.relations.csv - Inter-relations of rediscovered defects of Apache

    eclipse.relations.csv - Inter-relations of rediscovered defects of Eclipse

    kde.relations.csv - Inter-relations of rediscovered defects of KDE

    create_and_populate_neo4j_objects.cypher - Populates Neo4j graphDB by importing all the data from the CSV files. Note that you have to set dbms.import.csv.legacy_quote_escaping configuration setting to false to load the CSV files as per https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/#config_dbms.import.csv.legacy_quote_escaping

    create_and_populate_mysql_objects.sql - Populates MySQL RDBMS by importing all the data from the CSV files

    rediscovery_db_mysql.zip - For your convenience, we also provide full backup of the MySQL database

    neo4j_examples.txt - Sample Neo4j queries

    mysql_examples.txt - Sample MySQL queries

    rediscovery_eclipse_6325.png - Output of Neo4j example #1

    distinct_attrs.csv - Distinct values of bug_status, resolution, priority, severity for each project

  8. Data from: myPhyloDB

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Mar 30, 2024
    + more versions
    Cite
    Agricultural Research Service (2024). myPhyloDB [Dataset]. https://catalog.data.gov/dataset/myphylodb-c588e
    Explore at:
    Dataset updated
    Mar 30, 2024
    Dataset provided by
    Agricultural Research Service: https://www.ars.usda.gov/
    Description

    myPhyloDB is an open-source software package aimed at providing a user-friendly web interface for accessing and analyzing all of your laboratory's microbial ecology data (currently supported project types: soil, air, water, microbial, and human-associated). myPhyloDB archives users' raw sequencing files and allows for easy selection of any combination of projects/samples from all of your projects using the built-in SQL database. Its data processing capabilities are also flexible enough to allow the upload, storage, and analysis of pre-processed data or raw (454 or Illumina) data files using the built-in versions of Mothur and R.

    myPhyloDB is designed to run as a local web server, which allows a single installation to be accessible to all of your laboratory members, regardless of their operating system or other hardware limitations. It includes an embedded copy of the popular Mothur program and uses a customizable batch file to perform sequence editing and processing; this allows myPhyloDB to leverage the flexibility of Mothur while standardizing data processing and handling across all of your sequencing projects. It also includes an embedded copy of the R software environment for a variety of statistical analyses and graphics. Currently, myPhyloDB includes analysis for factor- or regression-based ANCOVA, principal coordinates analysis (PCoA), differential abundance analysis (DESeq), and sparse partial least-squares regression (sPLS).

    Resources in this dataset:

    Resource Title: Website Pointer to myPhyloDB. File Name: Web Page, url: https://myphylodb.azurecloudgov.us/myPhyloDB/home/ Provides information and links to download the latest version, release history, documentation, and tutorials, including the type of analysis you would like to perform (Univariate: ANCOVA/GLM; Multivariate: DiffAbund, PCoA, or sPLS).

  9. Definitions of incidence and prevalence terms.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    David A. Springate; Rosa Parisi; Ivan Olier; David Reeves; Evangelos Kontopantelis (2023). Definitions of incidence and prevalence terms. [Dataset]. http://doi.org/10.1371/journal.pone.0171784.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS: http://plos.org/
    Authors
    David A. Springate; Rosa Parisi; Ivan Olier; David Reeves; Evangelos Kontopantelis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Definitions of incidence and prevalence terms.

  10. CFM-ID Paper Data

    • epa.figshare.com
    zip
    Updated Mar 1, 2019
    Cite
    EPA's National Center for Computational Toxicology (2019). CFM-ID Paper Data [Dataset]. http://doi.org/10.23645/epacomptox.7776212.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 1, 2019
    Dataset provided by
    United States Environmental Protection Agency: http://www.epa.gov/
    Authors
    EPA's National Center for Computational Toxicology
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This upload is a zip containing the following files:

    Predicted EI-MS Spectra of CompTox Chemicals Dashboard Structures: Predicted EI-MS spectra of ~700,000 chemical structures from the CompTox Chemicals Dashboard were generated using the CFM-ID model developed by Allen, et al. (https://doi.org/10.1021/acs.analchem.6b01622). These data are provided in .dat ASCII format.

    Predicted MS/MS Spectra in ESI-positive mode of CompTox Chemicals Dashboard Structures: Predicted MS/MS spectra of ~700,000 chemical structures from the CompTox Chemicals Dashboard were generated using the CFM-ID model developed by Allen, et al. (https://doi.org/10.1007/s11306-014-0676-4) in ESI-positive mode. These data are provided in .dat ASCII format.

    Predicted MS/MS Spectra in ESI-negative mode of CompTox Chemicals Dashboard Structures: Predicted MS/MS spectra of ~700,000 chemical structures from the CompTox Chemicals Dashboard were generated using the CFM-ID model developed by Allen, et al. (https://doi.org/10.1007/s11306-014-0676-4) in ESI-negative mode. These data are provided in .dat ASCII format.

    Database of Predicted Spectra of CompTox Chemicals Dashboard Structures: Predicted spectra of ~700,000 chemical structures from the CompTox Chemicals Dashboard were generated using the CFM-ID model developed by Allen, et al. (https://doi.org/10.1007/s11306-014-0676-4 and https://doi.org/10.1007/s11306-014-0676-4) in ESI-positive and negative modes and EI-MS. These data are provided in an SQL relational database.

    Database Schema File of Predicted Spectra of CompTox Chemicals Dashboard Structures: Predicted spectra of ~700,000 chemical structures from the CompTox Chemicals Dashboard were generated using the CFM-ID model developed by Allen, et al. (https://doi.org/10.1007/s11306-014-0676-4 and https://doi.org/10.1007/s11306-014-0676-4) in ESI-positive and negative modes and EI-MS. These data are provided in an SQL relational database and described in this SQL Schema file.

    Chemical Metadata from the CompTox Chemicals Dashboard Linked to Predicted Spectra: Chemical metadata from the CompTox Chemicals Dashboard are linked through the unique DSSTox chemical identifier (DTXCID), enabling integration with predicted mass spectral data. Chemical metadata includes CASRN, molecular formula, SMILES, presence in lists, and data source occurrences.

    Chemical Structures that failed during mass spectral prediction: Due to structural errors and model constraints, prediction of all MS modes failed for 56 structures.

  11. Inventory of hazardous substance concentrations in different environmental...

    • b2find.dkrz.de
    Updated May 20, 2024
    + more versions
    Cite
    (2024). Inventory of hazardous substance concentrations in different environmental compartments in the Danube river basin - Dataset - B2FIND [Dataset]. https://b2find.dkrz.de/dataset/010673ea-aa02-57ab-8a6c-d6284ee61f76
    Explore at:
    Dataset updated
    May 20, 2024
    Area covered
    Danube River
    Description

    Description of the dataset

    The data set contains an SQL dump of a PostgreSQL database. This database contains concentrations of hazardous substances and other water quality parameters in different environmental compartments:

    - river water (water and suspended sediments)
    - ground water
    - waste water (treated and untreated) and sewage sludge
    - storm water runoff from combined and separate sewer systems
    - atmospheric deposition
    - soil

    Data from many different data sources were collected, checked, and combined, and metadata were harmonized to allow for a combined data evaluation.

    Technical specifications

    The SQL file was exported from a PostgreSQL 15.2 database and compressed using 7zip into a zip file (dhm3cinventoryV2.zip). Text encoding is UTF-8.

    Supplementary files

    A short documentation (documentation_inventory_db_V2.0.pdf) and a list of known issues with the data which could not be resolved before publication (List_of_known_issues_V2.0.pdf) are enclosed as PDF files.

    Versions

    Version 2.0.1: During creation of versions 1.0.0 and 2.0.0 there was a processing error in the coordinates of atmospheric deposition and soil sampling sites, leading to identical values for latitude and longitude for some sites. This was fixed in version 2.0.1.

    Version 2.0.0: This version of the database contains more data, as a publication agreement was reached for further data sets, and some data were reimported to resolve errors created during data preparation for import. The database structure was extended and corrected at different points, leading to an improved data model.

  12. BioTIME

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 21, 2020
    + more versions
    Cite
    BioTIME Consortium (2020). BioTIME [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_825783
    Explore at:
    Dataset updated
    Jan 21, 2020
    Dataset authored and provided by
    BioTIME Consortium
    Description

    The BioTIME database contains raw data on species identities and abundances in ecological assemblages through time. The database consists of 11 tables; one raw data table plus ten related meta data tables. For further information please see our associated data paper.

    This data consists of several elements:

    BioTIMESQL_11_07_2017.sql - an SQL file for the full public version of BioTIME which can be imported into any mySQL database.

    BioTIMEQuery_06_07_2017.csv - the full data file; too large to view in Excel, but it can be read into several software applications such as R or various database packages.

    BioTIMEMetadata_11_07_2017.csv - file containing the meta data for all studies.

    BioTIMECitations_11_07_2017.csv - file containing the citation list for all studies.

    InteractingWithBioTIME.html - a brief tutorial on using BioTIME, in the form of an R Markdown HTML page.

    Please note: any users of any of this material should cite the associated data paper in addition to the DOI listed here.
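Since the query file above is described as too large for spreadsheet software, a rough sketch of streaming it row by row with Python's stdlib csv module (the column names used here, such as "STUDY_ID", are hypothetical stand-ins, not the documented BioTIME schema):

```python
# Stream a large CSV one row at a time instead of loading it whole.
# "STUDY_ID" is a hypothetical column name for illustration only.
import csv
import io
from collections import Counter

def count_by(stream, column):
    """Count rows per distinct value of `column`, reading incrementally."""
    counts = Counter()
    for row in csv.DictReader(stream):
        counts[row[column]] += 1
    return counts

# A tiny in-memory stand-in for the real file.
sample = io.StringIO("STUDY_ID,ABUNDANCE\n10,5\n10,2\n18,7\n")
print(count_by(sample, "STUDY_ID"))
```

For the real file, replace the StringIO with `open("BioTIMEQuery_06_07_2017.csv", newline="")`; memory use stays constant because only one row is held at a time.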

  13. The dataset of the Global Collections survey of natural history collections

    • data.niaid.nih.gov
    Updated Nov 6, 2022
    Cite
    Drew, Nicholas (2022). The dataset of the Global Collections survey of natural history collections [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6985398
    Explore at:
    Dataset updated
    Nov 6, 2022
    Dataset provided by
    Vincent, Sarah
    Drew, Nicholas
    Corrigan, Robert J.
    Smith, Vincent S.
    Woodburn, Matt
    Meyer, Cailin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    From 2016 to 2018, we surveyed the world’s largest natural history museum collections to begin mapping this globally distributed scientific infrastructure. The resulting dataset includes 73 institutions across the globe. It has:

    Basic institution data for the 73 contributing institutions, including estimated total collection sizes, geographic locations (to the city) and latitude/longitude, and Research Organization Registry (ROR) identifiers where available.

    Resourcing information, covering the numbers of research, collections and volunteer staff in each institution.

    Indicators of the presence and size of collections within each institution broken down into a grid of 19 collection disciplines and 16 geographic regions.

    Measures of the depth and breadth of individual researcher experience across the same disciplines and geographic regions.

    This dataset contains the data (raw and processed) collected for the survey, and specifications for the schema used to store the data. It includes:

    A diagram of the MySQL database schema.

    A SQL dump of the MySQL database schema, excluding the data.

    A SQL dump of the MySQL database schema with all data. This may be imported into an instance of MySQL Server to create a complete reconstruction of the database.

    Raw data from each database table in CSV format.

    A set of more human-readable views of the data in CSV format. These correspond to the database tables, but foreign keys are substituted for values from the linked tables to make the data easier to read and analyse.

    A text file containing the definitions of the size categories used in the collection_unit table.
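The "foreign keys substituted for values" step that produces the human-readable views can be sketched in a few lines of Python. This is an illustrative sketch only: the table contents, column names, and institution names below are hypothetical, not from the actual schema.

```python
# Sketch of building a "human-readable view": swap a foreign-key column
# for the referenced value from a lookup table. All data is hypothetical.
institutions = {1: "Natural History Museum, London", 2: "Smithsonian NMNH"}

def readable(rows, fk_column, lookup, new_column):
    """Return copies of `rows` with `fk_column` replaced by a readable value."""
    out = []
    for row in rows:
        row = dict(row)  # copy so the raw table is left untouched
        row[new_column] = lookup[row.pop(fk_column)]
        out.append(row)
    return out

raw = [{"institution_id": 1, "staff": 300}, {"institution_id": 2, "staff": 450}]
print(readable(raw, "institution_id", institutions, "institution"))
```

In the published dataset the same substitution is presumably done with SQL joins when exporting the view CSVs; the Python version just makes the transformation explicit.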

    The global collections data may also be accessed at https://rebrand.ly/global-collections. This is a preliminary dashboard, constructed and published using Microsoft Power BI, that enables the exploration of the data through a set of visualisations and filters. The dashboard consists of three pages:

    Institutional profile: Enables the selection of a specific institution and provides summary information on the institution and its location, staffing, total collection size, collection breakdown and researcher expertise.

    Overall heatmap: Supports an interactive exploration of the global picture, including a heatmap of collection distribution across the discipline and geographic categories, and visualisations that demonstrate the relative breadth of collections across institutions and correlations between collection size and breadth. Various filters allow the focus to be refined to specific regions and collection sizes.

    Browse: Provides some alternative methods of filtering and visualising the global dataset to look at patterns in the distribution and size of different types of collections across the global view.

  14. Wells - Municipal - Platte River Basin (2006)

    • hub.arcgis.com
    • data.geospatialhub.org
    Updated Apr 23, 2018
    Cite
    wrds_wdo (2018). Wells - Municipal - Platte River Basin (2006) [Dataset]. https://hub.arcgis.com/documents/e39f45329e2a4fb3afa15034e459a3d4
    Explore at:
    Dataset updated
    Apr 23, 2018
    Dataset authored and provided by
    wrds_wdo
    Area covered
    Platte River
    Description

    Contains approximately 38,447 point locations of Wyoming well permits on file with the Wyoming State Engineer's Office. The wells have been located to the nearest 40-acre parcel. All locational information and attributes were imported from the Wyoming State Engineer's Office Well Permits Database stored in SQL Server.

  15. Industrial Wells - Wind/Bighorn River Basins (2010)

    • hub.arcgis.com
    • data.geospatialhub.org
    Updated Apr 23, 2018
    Cite
    wrds_wdo (2018). Industrial Wells - Wind/Bighorn River Basins (2010) [Dataset]. https://hub.arcgis.com/documents/3d07f8cdf2c543e4bec999f37d392a97
    Explore at:
    Dataset updated
    Apr 23, 2018
    Dataset authored and provided by
    wrds_wdo
    Area covered
    Bighorn River
    Description

    Point locations of Wyoming well permits on file with the Wyoming State Engineer's Office. All locational information and attributes were imported from the Wyoming State Engineer's Office Well Permits Database stored in SQL Server.

  16. Data from: Open-data release of aggregated Australian school-level...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Monteiro Lobato (2020). Open-data release of aggregated Australian school-level information. Edition 2016.1 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_46086
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset authored and provided by
    Monteiro Lobato,
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Australia
    Description

    The file set is a freely downloadable aggregation of information about Australian schools. The individual files represent a series of tables which, when considered together, form a relational database. The records cover the years 2008-2014 and include information on approximately 9500 primary and secondary school main campuses and around 500 sub-campuses. The records all relate to school-level data; no data about individuals is included. All the information has previously been published and is publicly available, but it has not previously been released as a documented, useful aggregation. The information includes: (a) the names of schools (b) staffing levels, including full-time and part-time teaching and non-teaching staff (c) student enrolments, including the number of boys and girls (d) school financial information, including Commonwealth government, state government, and private funding (e) test data, potentially for school years 3, 5, 7 and 9, relating to an Australian national testing programme known by the trademark 'NAPLAN'

    Documentation of this Edition 2016.1 is incomplete but the organization of the data should be readily understandable to most people. If you are a researcher, the simplest way to study the data is to make use of the SQLite3 database called 'school-data-2016-1.db'. If you are unsure how to use an SQLite database, ask a guru.

    The database was constructed directly from the other included files by running the following command at a command-line prompt:

    sqlite3 school-data-2016-1.db < school-data-2016-1.sql

    Note that a few, non-consequential, errors will be reported if you run this command yourself. The reason for the errors is that the SQLite database is created by importing a series of '.csv' files. Each of the .csv files contains a header line with the names of the variables relevant to each column. The information is useful for many statistical packages but it is not what SQLite expects, so it complains about the header. Despite the complaint, the database will be created correctly.

    Briefly, the data are organized as follows.

    (1) The .csv files ('comma separated values') do not actually use a comma as the field delimiter. Instead, the vertical bar character '|' (ASCII octal 174, decimal 124, hex 7C) is used. If you read the .csv files using Microsoft Excel, Open Office, or Libre Office, you will need to set the field separator to '|'. Check your software documentation to understand how to do this.

    (2) Each school-related record is indexed by an identifier called 'ageid'. The ageid uniquely identifies each school and consequently serves as the appropriate variable for JOIN-ing records in different data files. For example, the first school-related record after the header line in file 'students-headed-bar.csv' shows the ageid of the school as 40000. The relevant school name can be found by looking in the file 'ageidtoname-headed-bar.csv' to discover that the ageid of 40000 corresponds to a school called 'Corpus Christi Catholic School'.

    (3) In addition to the variable 'ageid', each record is also identified by one or two 'year' variables. The most important purpose of a year identifier is to indicate the year that is relevant to the record. For example, if one turns again to file 'students-headed-bar.csv', one sees that the first seven school-related records after the header line all relate to the school Corpus Christi Catholic School with ageid of 40000. The variable that identifies the important differences between these seven records is 'studentyear', which shows the year to which the student data refer. One can see, for example, that in 2008 there were a total of 410 students enrolled, of whom 185 were girls and 225 were boys (look at the variable names in the header line).

    (4) The variables relating to years are given different names in each of the different files ('studentsyear' in the file 'students-headed-bar.csv', 'financesummaryyear' in the file 'financesummary-headed-bar.csv'). Despite the different names, the year variables provide the second-level means for joining information across files. For example, if you wanted to relate the enrolments at a school in each year to its financial state, you might JOIN records using 'ageid' in the two files and, secondarily, match 'studentsyear' with 'financialsummaryyear'.

    (5) The manipulation of the data is most readily done using the SQL language with the SQLite database, but it can also be done in a variety of statistical packages.

    (6) It is our intention for Edition 2016-2 to create large 'flat' files suitable for use by non-researchers who want to view the data with spreadsheet software. The disadvantage of such 'flat' files is that they contain vast amounts of redundant information and might not display the data in the form that the user most wants.

    (7) Geocoding of the schools is not available in this edition.

    (8) Some files, such as 'sector-headed-bar.csv', are not used in the creation of the database but are provided as a convenience for researchers who might wish to recode some of the data to remove redundancy.

    (9) A detailed example of a suitable SQLite query can be found in the file 'school-data-sqlite-example.sql'. The same query, used in the context of analyses done with the excellent, freely available R statistical package (http://www.r-project.org), can be seen in the file 'school-data-with-sqlite.R'.
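The two-level JOIN pattern described above (match on 'ageid', then match the year variables) can be sketched with Python's built-in sqlite3 module. The sample rows below are made up for illustration, apart from the 2008 Corpus Christi figures quoted in the text, and the finance column name 'totalfunding' is an assumption:

```python
import csv
import io
import sqlite3

# Tiny stand-ins for the '|'-delimited files students-headed-bar.csv and
# financesummary-headed-bar.csv (2009 rows and 'totalfunding' are invented).
students_csv = """ageid|studentsyear|girls|boys|total
40000|2008|185|225|410
40000|2009|190|230|420
"""
finance_csv = """ageid|financesummaryyear|totalfunding
40000|2008|1500000
40000|2009|1600000
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (ageid INTEGER, studentsyear INTEGER,"
             " girls INTEGER, boys INTEGER, total INTEGER)")
conn.execute("CREATE TABLE financesummary (ageid INTEGER,"
             " financesummaryyear INTEGER, totalfunding INTEGER)")

for table, text in (("students", students_csv), ("financesummary", finance_csv)):
    reader = csv.reader(io.StringIO(text), delimiter="|")  # '|' field separator
    header = next(reader)  # skip the header line that trips up SQLite's importer
    marks = ",".join("?" * len(header))
    conn.executemany(f"INSERT INTO {table} VALUES ({marks})", reader)

# JOIN on ageid, then match the differently named year variables.
rows = conn.execute(
    """SELECT s.studentsyear, s.total, f.totalfunding
       FROM students s
       JOIN financesummary f
         ON f.ageid = s.ageid AND f.financesummaryyear = s.studentsyear
       ORDER BY s.studentsyear"""
).fetchall()
print(rows)  # [(2008, 410, 1500000), (2009, 420, 1600000)]
```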

  17. BioTIME

    • zenodo.org
    • data.niaid.nih.gov
    bin, csv
    Updated Jun 24, 2021
    Cite
    BioTIME Consortium; BioTIME Consortium (2021). BioTIME [Dataset]. http://doi.org/10.5281/zenodo.2582840
    Explore at:
    Available download formats: csv, bin
    Dataset updated
    Jun 24, 2021
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    BioTIME Consortium; BioTIME Consortium
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The BioTIME database contains raw data on species identities and abundances in ecological assemblages through time. The database consists of 11 tables: one raw data table plus ten related metadata tables. For further information, please see the associated data paper.

    This data consists of several elements:

    • BioTIMESQL_02_04_2018.sql - an SQL file for the full public version of BioTIME, which can be imported into any MySQL database.
    • BioTIMEQuery_02_04_2018.csv - the data file; although too large to view in Excel, it can be read into several software applications such as R or various database packages.
    • BioTIMEMetadata_02_04_2018.csv - file containing the metadata for all studies.
    • BioTIMECitations_02_04_2018.csv - file containing the citation list for all studies.
    • BioTIMECitations_02_04_2018.xlsx - file containing the citation list for all studies (some special characters are not supported in the csv format).
    • BioTIMEInteractions_02_04_2018.Rmd - an R Markdown document providing a brief overview of how to interact with the database and associated .csv files (this will not work until file paths and database connections have been added/updated).
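A query file too large for Excel can still be processed row by row without loading it into memory. A minimal streaming sketch with Python's standard csv module, run here on a tiny stand-in file (the column names STUDY_ID, YEAR and ABUNDANCE are assumptions; consult the metadata file for the real header):

```python
import csv
import os
import tempfile

def count_records_per_year(path, year_col="YEAR"):
    """Stream the CSV one row at a time; never loads the whole file."""
    counts = {}
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            counts[row[year_col]] = counts.get(row[year_col], 0) + 1
    return counts

# Tiny stand-in for BioTIMEQuery_02_04_2018.csv with an invented header.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False,
                                 newline="") as tmp:
    tmp.write("STUDY_ID,YEAR,ABUNDANCE\n1,1990,5\n1,1990,2\n1,1991,7\n")
    sample = tmp.name

result = count_records_per_year(sample)
print(result)  # {'1990': 2, '1991': 1}
os.remove(sample)
```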

    Please note: any users of any of this material should cite the associated data paper in addition to the DOI listed here.

    To cite the data paper use the following:

    Dornelas M, Antão LH, Moyes F, Bates AE, Magurran AE, et al. BioTIME: A database of biodiversity time series for the Anthropocene. Global Ecol Biogeogr. 2018; 27:760 - 786. https://doi.org/10.1111/geb.12729

  18. Nigerian Maize and Rice Seed Imports 2008-2015 - Dataset

    • splitgraph.com
    • catalog.data.gov
    Updated Mar 21, 2019
    Cite
    usaid-gov (2019). Nigerian Maize and Rice Seed Imports 2008-2015 - Dataset [Dataset]. https://www.splitgraph.com/usaid-gov/nigerian-maize-and-rice-seed-imports-20082015-h3n7-vkhf
    Explore at:
    Available download formats: application/vnd.splitgraph.image, json, application/openapi+json
    Dataset updated
    Mar 21, 2019
    Authors
    usaid-gov
    Area covered
    Nigeria
    Description

    The U.S. Borlaug Fellows in Global Food Security program is funded by the United States Agency for International Development (USAID) to expand the pool of U.S. food security professionals who have the scientific base needed to effectively study and manage the global landscapes in support of sustainable food systems. The intended objectives of the U.S. Borlaug Fellows in Global Food Security program are: a) To help train a new generation of interdisciplinary U.S. scientists with fluency in global food security and the skills to strengthen the capacity of developing countries to apply new innovations and technologies, b) To support the key research themes of the Feed the Future initiative and increase understanding of the links between agricultural production, nutritional status, natural resource conservation, and development, c) To foster cross-cultural understanding and dialog.

    These data show the quantities of maize and rice seed imported into Nigeria from the relevant source countries for the period 2008 to 2015. The data source is the Nigerian Customs Service.

    Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
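A sketch of what such a query might look like from Python, using only the standard library. The endpoint URL and the table name inside the repository are assumptions (the repository slug comes from the dataset URL above); check the Splitgraph documentation for the exact DDN endpoint and query syntax:

```python
import json
import urllib.request

# Hypothetical Splitgraph DDN SQL endpoint -- verify against the docs.
DDN_URL = "https://data.splitgraph.com/sql/query/ddn"

# Repository slug is from the dataset URL; the table name is a guess.
sql = (
    'SELECT * FROM '
    '"usaid-gov/nigerian-maize-and-rice-seed-imports-20082015-h3n7-vkhf"'
    '."seed_imports" LIMIT 10'
)

payload = json.dumps({"sql": sql}).encode()
req = urllib.request.Request(
    DDN_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the query and return JSON rows.
print(req.full_url, req.get_method())
```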

    See the Splitgraph documentation for more information.

  19. Chatham County - Soils

    • opendata-chathamncgis.opendata.arcgis.com
    • hub.arcgis.com
    Updated May 14, 2018
    Cite
    Chatham County GIS Portal (2018). Chatham County - Soils [Dataset]. https://opendata-chathamncgis.opendata.arcgis.com/datasets/chatham-county-soils
    Explore at:
    Dataset updated
    May 14, 2018
    Dataset authored and provided by
    Chatham County GIS Portal
    Area covered
    Description

    The published polygons representing the extent of soil classifications or types. A subset of soils data from the Natural Resources Conservation Service SSURGO data. Imported into the ChathamGIS SQL database in July 2015. Chatham GIS SOP: "MAPSERV-112"

  20. Data from: SPACK : spatio-temporal database dedicated to whaling, sealing...

    • zenodo.org
    • data.niaid.nih.gov
    pdf, zip
    Updated Jun 21, 2024
    Cite
    Bocher Elsa; Bocher Erwan (2024). SPACK : spatio-temporal database dedicated to whaling, sealing and fishing in Saint-Paul, Amsterdam, Crozet and Kerguelen Islands between 1780's and 1930's [Dataset]. http://doi.org/10.5281/zenodo.12207508
    Explore at:
    Available download formats: pdf, zip
    Dataset updated
    Jun 21, 2024
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    Bocher Elsa; Bocher Erwan
    Area covered
    Kerguelen Islands
    Description

    SPACK is a spatio-temporal database dedicated to whaling, sealing and fishing history. It aims to gather miscellaneous and scattered sources about whaling, sealing and fishing voyages that visited Saint-Paul, Amsterdam, Crozet and Kerguelen Islands between the 1780s and 1930s.

    SPACK has been defined and populated during a PhD thesis in history. The main purpose is to assess the attendance of whaling, sealing and fishing ships around the French Southern Islands from the late 18th century. The goal is also to shed light on the issues arising from the first public policies for managing natural resources once French sovereignty was affirmed in the late 19th century.

    The data collected in SPACK are stored in PostgreSQL, an object-relational database, with its spatial extension PostGIS. This repository can be used to create a new instance of the SPACK database. It contains 7 SQL files that represent the main tables of the SPACK model.

    - attested_presence_areas: this table shows the dates on which the vessel is present in the area.

    - code_areas: this table indicates the codes used to identify each covered area.

    - code_sealing_gangs: this table shows the code used to indicate when a gang of hunters has been dropped off or relieved on shore by the ship.

    - natural_resources: this table provides the codes used to classify vessel activity by 'area'.

    - shipment_origin: this table lists the codes for the main shipowner's geographical origin.

    - stop_over_voyages: this table describes the date of arrival and departure by 'area'. It also indicates the degree of interpolation of the data, month or day.

    - voyages_areas: this table contains a list of vessels involved in whaling, fishing and sealing activities that crossed Saint-Paul, Amsterdam, Crozet or/and Kerguelen islands. It provides information such as vessel name, rig type, tonnage, port, shipment origin, natural resource exploited, agent, dates of presence, primary and secondary sources.

    The main entity of this database is a ship attached to a voyage and a geographical area. This entity is described by a set of properties: ship’s and master’s names, geographical origin, shipowner, port, arrival and departure dates. Those data are featured in the voyages_areas table. The database also provides other helpful information, such as the dates of attendance on the island, the type of natural resource exploited and the sources used to identify a voyage.

    The SPACK database builds on the Whaling History Database (https://whalinghistory.org/). It does not contain any data imported from the WHDB, but it is still possible to link the two sources: the voyages_areas table stores the identifier used by the WHDB to describe each voyage.

    The WHDB provides the vessel's location in lat/lon for several voyages. Those locations have been processed to populate the voyages_areas table and to determine when a voyage crossed a study area: Saint-Paul, Amsterdam, Crozet or Kerguelen Islands. However, no spatial information is saved in the SQL files. You can contact the authors if you want more information about the spatial analysis techniques used.

Cite
Seair Exim, Seair Exim Solutions [Dataset]. https://www.seair.co.in

Sql Server Import Data of HS Code 84715000 India – Seair.co.in

Explore at:
19 scholarly articles cite this dataset (View in Google Scholar)
Available download formats: .bin, .xml, .csv, .xls
Dataset provided by
Seair Info Solutions PVT LTD
Authors
Seair Exim
Area covered
India
Description

Subscribers can find out export and import data of 23 countries by HS code or product’s name. This demo is helpful for market analysis.
