24 datasets found
  1. APRS World Database

    • data.wu.ac.at
    Updated Oct 10, 2013
    Cite
    Global (2013). APRS World Database [Dataset]. https://data.wu.ac.at/odso/datahub_io/OGUzMTBhYzgtMjg5Yi00ZGRiLWI2Y2YtNmI3MmYwMTg5NjBk
    Dataset updated
    Oct 10, 2013
    Dataset provided by
    Global
    Description

    [begin excerpt from Integrating the Aprsworld Database Into Your Application]

    The aprsworld.net project was started in March 2001 by James Jefferson Jarvis, KB0THN. The goal from the beginning has been to parse the APRS internet stream into data that can be stored in a relational database system.

    At the time of writing (September 2003), about 1 million raw APRS packets traverse the internet stream each day. Each one of these packets is parsed and inserted into the appropriate table of the aprsworld.net database. This results in about 5 million inserts a day, with an average of about 60 inserts/queries per second. The database grows by about 6 gigabytes per month.

    By using the aprsworld.net database you can save the trouble of collecting, parsing, and storing this large amount of data. Simple operations like finding the last position of an APRS station are extremely easy, and more complex data-mining operations are possible with minimal effort.

    [end excerpt]

    aprsworld-to-XML Interface

    This script provides an XML interface to aprsworld.net, so you don't need direct access to the aprsworld or findu databases, or knowledge of SQL, to get generalized, consistently formatted APRS data directly from the Internet into your application. Free code libraries for parsing XML are easy to find for almost any programming environment.

    As new minor versions of this script are made available, they will reside in their own directory containing the version number, so you can safely link to a script without future upgrades changing its behavior. (Bugfix-level versions will not have their own directories.)
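
    As an illustration of consuming such an interface, here is a minimal sketch in Python; the endpoint path and element/attribute names are hypothetical, since the script's URL scheme is not given above:

    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    # Hypothetical endpoint and element/attribute names, for illustration only.
    url = "http://aprsworld.net/xml/1.0/position.cgi?call=KB0THN"
    root = ET.parse(urlopen(url)).getroot()
    for station in root.iter("station"):  # assumed element name
        print(station.get("call"), station.get("latitude"), station.get("longitude"))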

  2. PLBD (Protein Ligand Binding Database) table description XML file

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Dec 26, 2022
    Cite
    Darius Lingė; Marius Gedgaudas; Andrius Merkys; Vytautas Petrauskas; Antanas Vaitkus; Algirdas Grybauskas; Vaida Paketurytė; Asta Zubrienė; Audrius Zakšauskas; Aurelija Mickevičiūtė; Joana Smirnovienė; Lina Baranauskienė; Edita Čapkauskaitė; Virginija Dudutienė; Ernestas Urniežius; Aleksandras Konovalovas; Egidijus Kazlauskas; Saulius Gražulis; Daumantas Matulis (2022). PLBD (Protein Ligand Binding Database) table description XML file [Dataset]. http://doi.org/10.5281/zenodo.7482008
    Available download formats: zip
    Dataset updated
    Dec 26, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Darius Lingė; Marius Gedgaudas; Andrius Merkys; Vytautas Petrauskas; Antanas Vaitkus; Algirdas Grybauskas; Vaida Paketurytė; Asta Zubrienė; Audrius Zakšauskas; Aurelija Mickevičiūtė; Joana Smirnovienė; Lina Baranauskienė; Edita Čapkauskaitė; Virginija Dudutienė; Ernestas Urniežius; Aleksandras Konovalovas; Egidijus Kazlauskas; Saulius Gražulis; Daumantas Matulis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PLBD (Protein Ligand Binding Database) table description XML file
    =================================================================

    General
    -------

    The provided ZIP archive contains an XML file "main-database-description.xml" with the description of all tables (VIEWS) that are exposed publicly at the PLBD server (https://plbd.org/). In the XML file, all columns of the visible tables are described, specifying their SQL types, measurement units, semantics, calculation formulae, SQL statements that can be used to generate values in these columns, and publications of the formulae derivations.

    The XML file conforms to the published XSD schema created for descriptions of relational databases for specifications of scientific measurement data. The XSD schema ("relational-database_v2.0.0-rc.18.xsd") and all included sub-schemas are provided in the same archive for convenience. All XSD schemas are validated against the "XMLSchema.xsd" schema from the W3C consortium.

    The ZIP file contains an excerpt of the files hosted at https://plbd.org/ at the moment of submission of the PLBD database to the Scientific Data journal, and is provided to conform to the journal's policies. The current data and schemas should be fetched from the published URIs:

    https://plbd.org/
    https://plbd.org/doc/db/schemas
    https://plbd.org/doc/xml/schemas

    The software used to generate SQL schemas and RestfulDB metadata, as well as the RestfulDB middleware that publishes the databases generated from the XML description on the Web, is available from public Subversion repositories:

    svn://www.crystallography.net/solsa-database-scripts
    svn://saulius-grazulis.lt/restfuldb

    Usage
    -----

    Unpacking the ZIP file creates the "db/" directory with the tree layout given below. In addition to the database description file "main-database-description.xml", all XSD schemas necessary for validation of the XML file are provided. On a GNU/Linux operating system with the GNU Make package installed, the validity of the XML file can be checked by unpacking the ZIP file, entering the unpacked directory, and running 'make distclean; make'. For example, on a Linux Mint distribution, the following commands should work:

    unzip main-database-description.zip          # unpack the archive
    cd db/release/v0.10.0/tables/                # enter the unpacked directory
    sh -x dependencies/Linuxmint-20.1/install.sh # install the required packages
    make distclean                               # remove any stale outputs
    make                                         # validate the XML file

    If necessary, additional packages can be installed using the 'install.sh' script in the 'dependencies/' subdirectory corresponding to your operating system. At the moment of writing, the Debian-10 and Linuxmint-20.1 OSes are supported out of the box; similar OSes might work with the same 'install.sh' scripts. The installation scripts must run the package installation commands with system administrator privileges, but they use *only* the standard system package manager, so they should not put your system at risk. For validation and syntax checking, the 'rxp' and 'xmllint' programs are used.

    The log files provided in the "outputs/validation" subdirectory contain validation logs obtained on the system where the XML files were last checked and should indicate the validity of the provided XML file against the referenced schemas.
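
    For a quick programmatic check without the Make machinery, the same validation can be sketched in Python with lxml; the relative paths assume you run it from the "tables/" directory and that the main XSD sits in the "schema/" subdirectory, which may differ in practice:

    from lxml import etree

    # lxml resolves xs:include/xs:import in the XSD relative to its location.
    schema = etree.XMLSchema(etree.parse("schema/relational-database_v2.0.0-rc.18.xsd"))
    doc = etree.parse("main-database-description.xml")
    print(schema.validate(doc))  # True if the document conforms to the schema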

    Layout of the archived file tree
    --------------------------------

    db/
    └── release
        └── v0.10.0
            └── tables
                ├── Makeconfig-validate-xml
                ├── Makefile
                ├── Makelocal-validate-xml
                ├── dependencies
                ├── main-database-description.xml
                ├── outputs
                └── schema

  3. Data from The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans

    • cancerimagingarchive.net
    • dev.cancerimagingarchive.net
    dicom, n/a, xls, xlsx +1
    Cite
    The Cancer Imaging Archive, Data from The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans [Dataset]. http://doi.org/10.7937/K9/TCIA.2015.LO9QL9SX
    Available download formats: xlsx, xls, n/a, xml and zip, dicom
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Sep 21, 2020
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    The Lung Image Database Consortium image collection (LIDC-IDRI) consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.

    Seven academic centers and eight medical imaging companies collaborated to create this data set, which contains 1018 cases. Each subject includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥ 3 mm," "nodule < 3 mm," and "non-nodule ≥ 3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus.

    Note: The TCIA team strongly encourages users to review pylidc and the standardized DICOM representation of the TCIA LIDC-IDRI annotations (DICOM-LIDC-IDRI-Nodules) before developing custom tools to analyze the XML version of the annotations/segmentations included in this dataset.
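
    For example, a minimal pylidc sketch along the lines the note suggests; the patient ID is a placeholder:

    import pylidc as pl

    # Query one scan by its (placeholder) patient ID and walk its annotations.
    scan = pl.query(pl.Scan).filter(pl.Scan.patient_id == "LIDC-IDRI-0001").first()
    print(scan.patient_id, scan.slice_thickness)
    for ann in scan.annotations:
        print(ann.subtlety, ann.diameter)  # rating and estimated nodule diameter (mm)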

  4. Data from: The DBLP Computer Science Bibliography

    • datahub.io
    • data.wu.ac.at
    dtd, gz:xml
    Updated Oct 10, 2013
    Cite
    Bibliographic Data (2013). The DBLP Computer Science Bibliography [Dataset]. https://datahub.io/dataset/14ced4d9-3dbc-41ad-9cc0-ed430fa8f8ca
    Available download formats: dtd(9133), gz:xml(132000000)
    Dataset updated
    Oct 10, 2013
    Dataset provided by
    Bibliographic Data
    Description

    The DBLP computer science bibliography contains the metadata of over 1.8 million publications, written by over 1 million authors in several thousands of journals or conference proceedings series.

    Although DBLP started with a focus on database systems and logic programming (hence the acronym), it has grown to cover all disciplines of computer science.

    Data

    Resources list the full dump of the DBLP XML records (see http://dblp.uni-trier.de/xml/); a simple DTD is available.

    The paper "DBLP - Some Lessons Learned" documents technical details of this XML file. In the appendix "DBLP XML Requests" you may find the description of a primitive DBLP API.

    Openness: OPEN

    As of 2011-12-09 this data is open (released under ODC-By). See the license information in the Readme.txt and the announcement post: http://openbiblio.net/2011/12/09/dblp-releases-its-1-8-million-bibliographic-records-as-open-data/
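
    A minimal sketch of streaming over the dump with lxml, assuming dblp.xml has been uncompressed next to its dblp.dtd so that the character entities resolve; "article" is just one of the record types:

    from lxml import etree

    # load_dtd=True pulls in dblp.dtd, which defines the character entities.
    for _, elem in etree.iterparse("dblp.xml", tag="article", load_dtd=True):
        title = elem.findtext("title")
        authors = [a.text for a in elem.findall("author")]
        # ... process the record ...
        elem.clear()  # keep memory bounded on a multi-gigabyte file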

  5. Data on Baltimore restaurants

    • figshare.com
    txt
    Updated Jan 20, 2016
    Cite
    Xuan hong Ong (2016). Data on Baltimore restaurants [Dataset]. http://doi.org/10.6084/m9.figshare.1477010.v1
    Available download formats: txt
    Dataset updated
    Jan 20, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Xuan hong Ong
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Baltimore
    Description

    Read the XML data on Baltimore restaurants from here: https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Frestaurants.xml
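
    A minimal sketch of fetching and parsing the file with Python's standard library; the "row" tag below is an assumption about the file's layout, not documented here:

    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    url = ("https://d396qusza40orc.cloudfront.net/"
           "getdata%2Fdata%2Frestaurants.xml")
    root = ET.parse(urlopen(url)).getroot()
    print(root.tag, len(list(root.iter("row"))))  # "row" is an assumed tag name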

  6. A case study for the implementation of an integrated variable speed limit (VSL) control strategy in a freeway section of I-80 based on SUMO simulations

    • explore.openaire.eu
    • zenodo.org
    • +1more
    Updated Jan 25, 2021
    Cite
    Hang Gao; Michael Zhang (2021). A case study for the implementation of an integrated variable speed limit (VSL) control strategy in a freeway section of I-80 based on SUMO simulations [Dataset]. http://doi.org/10.25338/b8qd04
    Dataset updated
    Jan 25, 2021
    Authors
    Hang Gao; Michael Zhang
    Area covered
    Interstate 80, Speed limit
    Description

    The input data consists of the corridor structure and the traffic demand data. The traffic demand data is obtained from the public PeMS database; details can be found at http://pems.dot.ca.gov/. The road network is constructed and modified with netedit; details can be found at https://sumo.dlr.de/docs/netedit.html. A 10-mile-long freeway section of Interstate 80 Eastbound, with 6 junctions across the city of Davis, CA, was selected to evaluate our VSL control strategies. This section has a series of recurrent bottlenecks, and severe congestion occurs almost every day in the afternoon peak hours. These multiple bottlenecks are all "critical" along the path. Junction 70 is interconnected with SR-113, another freeway from the north, which introduces heavy merging traffic without metering. A lane drop from 6 to 3 lanes exists between Junctions 71 and 72. With saturated mainline flow and extra ramp demand at Junctions 75 and 78, the downstream traffic flow is prone to breakdown even with ramp metering activated in peak hours.

    The output data is generated through the SUMO simulation. In this simulation, the Traffic Control Interface (TraCI) uses a TCP-based client/server architecture to connect to SUMO, through which values of vehicles and detectors can be retrieved and the VSL control models constructed for analysis of the simulation results. Details can be found at https://sumo.dlr.de/docs/TraCI.html.

    Input data:

    • vsl_I-80.net.xml: network file for the 10-mile-long freeway section of Interstate 80 Eastbound connecting the city of Davis and West Sacramento in California
    • vsl_I-80.additionals.xml: induction loop detectors that capture the vehicle data at every simulation step
    • vsl_I-80.flow.xml: 5 hours of traffic demand data (OD pairs) with three typical demand sets (light, medium, and heavy)
    • vsl_I-80.rou.xml: vehicle routes and trip information computed via the shortest-path-based duarouter tool
    • vsl_I-80.sumocfg.xml: configuration file that glues the input files together and makes the scenario executable by SUMO

    Output data:

    • emissions_no_vsl.xml: aggregated travel time, fuel consumption, and pollutants without a control strategy
    • emissions_static.xml: output based on the flow-based control strategy
    • emissions_lqr.xml: output based on the density-based LQR control strategy

    This project aims at reducing fuel consumption and greenhouse gas emissions by applying variable speed limit (VSL) control strategies to traffic corridors with multiple segments and multiple bottlenecks. The dataset is composed of inputs and outputs of the SUMO simulation model via the TraCI API. SUMO is a microscopic traffic simulation platform that simulates a given traffic demand through a given network. The inputs consist of the vehicle trip data obtained from the PeMS database for the 10-mile-long freeway section of Interstate 80 Eastbound; the outputs are simulation results of the aggregated average travel time, fuel consumption, and carbon emissions under different VSL strategies with different optimal speed limits.
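
    A minimal sketch of driving this scenario through SUMO's Python TraCI bindings; the edge ID in the comment is a placeholder, and the sumo binary is assumed to be on the PATH:

    import traci

    # Launch SUMO with the configuration file shipped in this dataset.
    traci.start(["sumo", "-c", "vsl_I-80.sumocfg.xml"])
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        # A VSL controller would read detector values here and update the
        # speed limit of a bottleneck edge, e.g.:
        # traci.edge.setMaxSpeed("edge_id", 20.0)  # placeholder edge ID; speed in m/s
    traci.close()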

  7. Data from: Investigating automated bird detection from webcams using machine learning

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Sep 30, 2024
    Cite
    Alex Mirugwe (2024). Investigating automated bird detection from webcams using machine learning [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5172213
    Dataset updated
    Sep 30, 2024
    Dataset provided by
    Alex Mirugwe
    Emmanuel Dufourq
    Juwa Nyirenda
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We provide a dataset of images (.jpeg) with their corresponding annotation files (.xml) used to train a bird detection deep learning model. These images were collected from the live stream feeds of the Cornell Lab of Ornithology (https://www.allaboutbirds.org/cams/), situated in 6 unique locations around the world, as follows:

    Treman bird feeding garden at the Cornell Ornithology Laboratory in Ithaca, New York. At this station, Axis P1448-LE cameras are used to capture the recordings from feeders perched on the edge of both Sapsucker Woods and its 10-acre pond. This site mainly attracts forest species like chickadees (Poecile atricapillus), red-winged blackbirds (Agelaius phoeniceus), and woodpeckers (Picidae). A total of 2065 images were captured from this location.

    Fort Davis in Western Texas, USA. At this site, a total of 30 hummingbird feeder cams are hosted at an elevation of over 5500 feet. From this site, 1440 images were captured.

    Sachatamia Lodge in Mindo, Ecuador. This site has a live hummingbird feed watcher that attracts over 132 species of hummingbirds including: Fawn-breasted Brilliant, White-necked Jacobin, Purple-bibbed Whitetip, Violet-tailed Sylph, Velvet-purple Coronet, and many others. A total of 2063 images were captured from this location.

    Morris County, New Jersey, USA. Feeders at this location attract over 39 species including Red-bellied Woodpecker, Red-winged Blackbird, Purple Finch, Blue Jay, Pine Siskin, Hairy Woodpecker, and others. Footage at this site is captured by an Axis P1448-LE Camera and Axis T8351 Microphone. A total of 1876 images were recorded from this site.

    Canopy Lodge in El Valle de Anton, Panama. Over 158 bird species visit this location annually and these include Gray-headed Chachalaca, Ruddy Ground-Dove, White-tipped Dove, Green Hermit, and others. A total of 1600 images were captured.

    Southeast tip of South Island, New Zealand. At this site, nearly 10000 seabirds visit this location annually and a total of 1548 images were captured.

    The Cornell Lab of Ornithology is an institute dedicated to biodiversity conservation, with a main focus on birds, through research, citizen science, and education. The autoscreen software (https://sourceforge.net/projects/autoscreen/) was used to capture the images from the live feeds; approximately 1-megapixel JPEG (Joint Photographic Experts Group) colour images with a resolution of 1366 × 768 × 3 pixels were collected. The software took a new image every 30 seconds, at different times of the day, in order to avoid a sample-biased dataset. In total, 10592 images were collected for this study.

    Files provided

    Train.zip – contains 6779 image files (.jpeg) and 6779 annotation files (.xml)

    Validation.zip – contains 1695 image files (.jpeg) and 1695 annotation files (.xml)

    Test.zip – contains 2118 image files (.jpeg)

    Scripts.zip – contains the scripts needed for manipulating the dataset, such as dataset partitioning and creation of the CSV and tfrecords files (a partitioning sketch follows below).

    This dataset was used in the MSc thesis titled “Investigating automated bird detection from webcams using machine learning” by Alex Mirugwe, University of Cape Town – South Africa.
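
    A minimal sketch of the pairing-and-partitioning step mentioned for Scripts.zip, assuming each image and its annotation share a base name; the split ratio and directory name are placeholders, not taken from the scripts themselves:

    import pathlib
    import random

    images = sorted(pathlib.Path("images").glob("*.jpeg"))  # placeholder directory
    random.seed(0)                 # reproducible shuffle
    random.shuffle(images)
    cut = int(0.8 * len(images))   # placeholder 80/20 split
    splits = {"train": images[:cut], "validation": images[cut:]}
    for name, files in splits.items():
        pairs = [(img, img.with_suffix(".xml")) for img in files]  # image/annotation pairs
        print(name, len(pairs))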

  8. A Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis

    • cancerimagingarchive.net
    dicom, n/a, xlsx, xml
    + more versions
    Cite
    The Cancer Imaging Archive, A Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis [Dataset]. http://doi.org/10.7937/TCIA.2020.NNC2-0461
    Available download formats: xml, n/a, xlsx, dicom
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Dec 22, 2020
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    This dataset consists of CT and PET-CT DICOM images of lung cancer subjects with XML Annotation files that indicate tumor location with bounding boxes. The images were retrospectively acquired from patients with suspicion of lung cancer, and who underwent standard-of-care lung biopsy and PET/CT. Subjects were grouped according to a tissue histopathological diagnosis. Patients with Names/IDs containing the letter 'A' were diagnosed with Adenocarcinoma, 'B' with Small Cell Carcinoma, 'E' with Large Cell Carcinoma, and 'G' with Squamous Cell Carcinoma.

    The images were analyzed in the mediastinum (window width, 350 HU; level, 40 HU) and lung (window width, 1,400 HU; level, -700 HU) settings. The reconstructions were made at a 2 mm slice thickness in lung settings. The CT slice interval varies from 0.625 mm to 5 mm. Scanning modes include plain, contrast, and 3D reconstruction.

    Before the examination, the patient underwent fasting for at least 6 hours, and the blood glucose of each patient was less than 11 mmol/L. Whole-body emission scans were acquired 60 minutes after the intravenous injection of 18F-FDG (4.44MBq/kg, 0.12mCi/kg), with patients in the supine position in the PET scanner. FDG doses and uptake times were 168.72-468.79MBq (295.8±64.8MBq) and 27-171min (70.4±24.9 minutes), respectively. 18F-FDG with a radiochemical purity of 95% was provided. Patients were allowed to breathe normally during PET and CT acquisitions. Attenuation correction of PET images was performed using CT data with the hybrid segmentation method. Attenuation corrections were performed using a CT protocol (180mAs,120kV,1.0pitch). Each study comprised one CT volume, one PET volume and fused PET and CT images: the CT resolution was 512 × 512 pixels at 1mm × 1mm, the PET resolution was 200 × 200 pixels at 4.07mm × 4.07mm, with a slice thickness and an interslice distance of 1mm. Both volumes were reconstructed with the same number of slices. Three-dimensional (3D) emission and transmission scanning were acquired from the base of the skull to mid femur. The PET images were reconstructed via the TrueX TOF method with a slice thickness of 1mm.

    The location of each tumor was annotated by five academic thoracic radiologists with expertise in lung cancer to make this dataset a useful tool and resource for developing algorithms for medical diagnosis. Two of the radiologists had more than 15 years of experience and the others had more than 5 years of experience. After one of the radiologists labeled each subject, the other four radiologists performed a verification, resulting in all five radiologists reviewing each annotation file in the dataset. Annotations were captured using LabelImg. The image annotations are saved as XML files in PASCAL VOC format, which can be parsed using the PASCAL Development Toolkit: https://pypi.org/project/pascal-voc-tools/. Python code to visualize the annotation boxes on top of the DICOM images can be downloaded here.
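
    A minimal sketch of reading one such PASCAL VOC annotation file with Python's standard library; the file name is a placeholder and the element names follow the standard VOC layout:

    import xml.etree.ElementTree as ET

    root = ET.parse("annotation.xml").getroot()  # placeholder file name
    for obj in root.iter("object"):
        name = obj.findtext("name")              # class label
        box = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(float(box.findtext(t)))
                                  for t in ("xmin", "ymin", "xmax", "ymax"))
        print(name, xmin, ymin, xmax, ymax)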

    Two deep learning researchers used the images and the corresponding annotation files to train several well-known detection models, which resulted in a maximum mean average precision (mAP) of around 0.87 on the validation set.

  9. OBIS - ARGOS Satellite Tracking of Animals

    • cwcgom.aoml.noaa.gov
    • data.amerigeoss.org
    • +1more
    Updated Jun 27, 2019
    + more versions
    Cite
    (2019). OBIS - ARGOS Satellite Tracking of Animals [Dataset]. https://cwcgom.aoml.noaa.gov/erddap/info/aadcArgos/index.html
    Dataset updated
    Jun 27, 2019
    Area covered
    Earth
    Variables measured
    ID, Sex, time, Class, Genus, Notes, Order, County, Family, Phylum, and 67 more
    Description

    Various species have been tracked using ARGOS PTT trackers since the early 1990s. These include Emperor, King, and Adelie penguins; Light-mantled Sooty, Grey-headed, and Black-browed albatrosses; Antarctic and Australian fur seals; the Southern Elephant Seal; and Blue and Humpback whales. Note that not all data for any species or location is or will be exposed to OBIS. Geographic coverage is from Heard Island in the west to Macquarie Island in the east, plus several islands near the southern end of Chile. The data has been filtered to remove most, but not all, erroneous positions.

    DiGIR is an engine which takes XML requests for data and returns a data subset stored as XML data (as defined in a schema). For more DiGIR information, see http://digir.sourceforge.net/, http://diveintodigir.ecoforge.net/draft/digirdive.html, and http://digir.net/prov/prov_manual.html. A list of DiGIR providers is at http://bigdig.ecoforge.net/wiki/SchemaStatus.

    Darwin is the original schema for use with the DiGIR engine.

    The Ocean Biogeographic Information System (OBIS) schema extends Darwin. For more OBIS info, see http://www.iobis.org . See the OBIS schema at http://www.iobis.org/tech/provider/questions .

    Queries: Although OBIS datasets have many variables, most variables have few values. The only queries that are likely to succeed MUST include a constraint for Genus= and MAY include constraints for Species=, longitude, latitude, and time.

    Most OBIS datasets return a maximum of 1000 rows of data per request. The limitation is imposed by the OBIS administrators.

    Available Genera (and number of records): (error)

    Selected ERDDAP metadata:

    • cdm_data_type=Point; featureType=Point
    • Conventions=COARDS, CF-1.6, ACDD-1.3
    • Geographic bounds: longitude -180.0 to 180.0 (degrees_east); latitude -90.0 to 90.0 (degrees_north); vertical positive up (m)
    • infoUrl=http://data.aad.gov.au/; institution=AADC
    • sourceUrl=http://aadc-maps.aad.gov.au/digir/digir.php
    • standard_name_vocabulary=CF Standard Name Table v55
    • Citation: see the following metadata records, and contact the Data Centre for help on citation details:
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/DB_Argos_PTT_Tracking
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/HI_animaltracks_ARGOS
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_BI
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_Mag
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/STA_Bibliography
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_SI
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_EDP
      http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_DD
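
    A minimal sketch of such a query against the ERDDAP tabledap endpoint for this dataset; the genus value is a placeholder:

    from urllib.parse import quote
    from urllib.request import urlopen

    base = "https://cwcgom.aoml.noaa.gov/erddap/tabledap/aadcArgos.csv"
    # Request a few columns and constrain Genus, as required; keep the
    # separators but percent-encode the quotes around the string value.
    query = quote('time,latitude,longitude,Genus,Species&Genus="Aptenodytes"',
                  safe="=&,")
    print(urlopen(f"{base}?{query}").read().decode()[:500])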

  10. Attractions - tourist information database

    • data.gov.tw
    csv, json, kml, shp +2
    Updated Jun 1, 2025
    + more versions
    Cite
    Tourism Administration, Ministry of Transportation and Communications (2025). Attractions - tourist information database [Dataset]. https://data.gov.tw/en/datasets/7777
    Available download formats: shp, xml, compressed archive (壓縮檔), csv, kml, json
    Dataset updated
    Jun 1, 2025
    Dataset authored and provided by
    Tourism Administration, Ministry of Transportation and Communications
    License

    https://data.gov.tw/license

    Description

    The Tourism Administration, Ministry of Transportation and Communications, collects spatial tourism information published by various government agencies, including data on tourist attractions, activities, dining and accommodation, tourist service stations, trails, bicycle paths, etc., to provide comprehensive tourism GIS base data that operators can use in value-added applications. For the XML field descriptions of each dataset, refer to Tourism Data Standard V1.0 at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for Tourism Data Standard V2.0 data, refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.

  11. Metaclusters by DPCfam clustering of UniRef50 v 2017_07

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Oct 30, 2022
    Cite
    Elena Tea Russo; Federico Barone (2022). Metaclusters by DPCfam clustering of UniRef50 v 2017_07 [Dataset]. http://doi.org/10.5281/zenodo.6900559
    Available download formats: application/gzip
    Dataset updated
    Oct 30, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Elena Tea Russo; Federico Barone
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Metaclusters obtained from the DPCfam clustering of UniRef50, v. 2017_07.
    Metaclusters represent putative protein families automatically derived using the DPCfam method, as described in "Unsupervised protein family classification by Density Peak clustering", Russo ET, 2020, PhD Thesis, http://hdl.handle.net/20.500.11767/116345. Supervisors: Alessandro Laio, Marco Punta.

    Visit also https://dpcfam.areasciencepark.it/ to easily navigate the data.

    VERSION 1.1 changes:

    • Added the DPCfamB database, including all small metaclusters with 25<=N<50 seed sequences. DPCfamB files are named with the prefix B_
    • Added an AlphaFold representative based on AlphaFoldDB for each MC

    FILES DESCRIPTION:

    1) Standard DPCfam database

    • metaclusters_xml.tar.gz: Metaclusters' seeds, unaligned, in an XML table. Only MCs whose seeds have 1) more than 50 elements and 2) an average length larger than 50 a.a. are reported. Metacluster entries also include some statistical information about each MC (such as size, average length, low-complexity fraction, etc.) and a Pfam comparison (Dominant Architecture). A README file describing the data is included. A parser to transform the XML data into space-separated tables is included. The XML schema is included.
    • metaclusters_msas.tar.gz: Metaclusters' multiple sequence alignments, in FASTA format. Only MCs whose seeds have 1) more than 50 elements and 2) an average length larger than 50 a.a. are reported.
    • metaclusters_hmms.tar.gz: Metaclusters' profile HMMs, with a ".hmm" file for each metacluster. Only MCs whose seeds have 1) more than 50 elements and 2) an average length larger than 50 a.a. are reported.
    • all_metaclusters_hmm.tar.gz: Collective metaclusters' profile HMM, a single .hmm file collecting all MCs' profile HMMs. Only MCs whose seeds have 1) more than 50 elements and 2) an average length larger than 50 a.a. are reported.
    • uniref50_annotated.xml.gz: UniRef50 v. 2017_07 database annotated with Pfam families and DPCfam metaclusters. A README file describing the data is included. A parser to transform the XML data into space-separated tables is included. The XML schema is included; it is derived from UniProt's UniRef50 XML schema. (A small streaming sketch follows at the end of this entry.)

    2) DPCfamB database

    • B_metaclusters_xml.tar.gz: Metaclusters' seeds, unaligned, in an XML table. All metaclusters are listed. Metacluster entries also include some statistical information about each MC (such as size, average length, low-complexity fraction, etc.) and a Pfam comparison (Dominant Architecture). A README file describing the data is included. A parser to transform the XML data into space-separated tables is included. The XML schema is included.
    • B_metaclusters_msas.tar.gz: Metaclusters' multiple sequence alignments, in FASTA format. Only MCs whose seeds have 1) 25<=N<50 elements and 2) an average length larger than 50 a.a. are reported.
    • B_metaclusters_hmms.tar.gz: Metaclusters' profile HMMs, with a ".hmm" file for each metacluster. Only MCs whose seeds have 1) 25<=N<50 elements and 2) an average length larger than 50 a.a. are reported.
    • B_all_metaclusters_hmm.tar.gz: Collective metaclusters' profile HMM, a single .hmm file collecting all MCs' profile HMMs. Only MCs whose seeds have 1) 25<=N<50 elements and 2) an average length larger than 50 a.a. are reported.
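
    A minimal sketch of streaming through uniref50_annotated.xml.gz, as referenced in the first file list above; the "entry" tag is an assumption (see the bundled README and XML schema for the actual element names):

    import gzip
    import xml.etree.ElementTree as ET

    with gzip.open("uniref50_annotated.xml.gz") as f:
        for _, elem in ET.iterparse(f):
            if elem.tag.endswith("entry"):   # assumed tag name
                ...                          # inspect Pfam / DPCfam annotations here
                elem.clear()                 # keep memory bounded on a large file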

  12. Catering - Tourism Information Database

    • data.gov.tw
    csv, json, kml, shp +2
    Updated Jun 1, 2025
    Cite
    Tourism Administration, Ministry of Transportation and Communications (2025). Catering - Tourism Information Database [Dataset]. https://data.gov.tw/en/datasets/7779
    Available download formats: compressed archive (壓縮檔), kml, csv, json, shp, xml
    Dataset updated
    Jun 1, 2025
    Dataset authored and provided by
    Tourism Administration, Ministry of Transportation and Communications
    License

    https://data.gov.tw/license

    Description

    The Ministry of Transportation and Communications' Tourism Bureau collects spatial tourism information released by various government agencies, including data on tourist attractions, activities, dining and accommodation, tourism service locations, trails, and bike paths, providing comprehensive tourism GIS base data that industry practitioners can use to add value. For the XML field descriptions of each dataset, refer to Tourism Data Standard V1.0 at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for Tourism Data Standard V2.0 data, refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.

  13. GON

    • integbio.jp
    Cite
    Yamaguchi University Faculty of Science, GON [Dataset]. https://integbio.jp/dbcatalog/en/record/nbdc00079?jtpl=56
    Explore at:
    Dataset provided by
    Yamaguchi University Faculty of Science
    Description

    GON is a software platform for biological pathway modeling and simulation. It is based on two architectures: the hybrid functional Petri net (HFPN) and XML technology. Pathway models of HFPN are also explained in detail. Petri nets provide a method of describing concurrent systems, such as manufacturing systems and communication protocols, and of representing biological pathways graphically. Petri Net Pathways includes the IL-1, G-protein, and TPO signaling pathways, as well as a new pathway model of p53 and related genes.

  14. Reference framework and theoretical offer in Netex format

    • ckan.mobidatalab.eu
    Updated Sep 11, 2023
    Cite
    Île-de-France Mobilités (2023). Reference framework and theoretical offer in Netex format [Dataset]. https://ckan.mobidatalab.eu/dataset/reference-and-theoretical-offer-in-netex-format
    Available download formats (IANA media types): text/turtle, text/csv, application/json, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, text/n3, application/rdf+xml, application/ld+json, application/octet-stream
    Dataset updated
    Sep 11, 2023
    Dataset provided by
    Île-de-France Mobilités
    License

    http://vvlibri.org/fr/licence/odbl-10/legalcode/unofficial

    Description

    This dataset includes two XML files and a ZIP folder:

    • Arrets_Netex.xml which describes the data from the Île-de-France Mobilités stops repository in NeTEx format.
    • Lignes_Netex.xml which describes the data from the Île-de-France Mobilités lines repository in NeTEx format.
    • offre_Netex.zip which describes the data from the theoretical offer of Île-de-France Mobilités in NeTEx format.
    Attention: the offre_Netex.zip folder also contains files listing the stops (stops.xml) and lines (lines.xml). The data source for these files is the same as for the Arrets_Netex.xml and Lignes_Netex.xml files (the Île-de-France Mobilités repositories). However, stops.xml and lines.xml have the following specific features:

    • Only the objects used in the theoretical offer are present in these files.
    • The structure of the files is slightly different from that of the files taken directly from the repositories.

    The data available in this dataset is extracted from the repositories' web services. The web services will be opened later via the PRIM portal of Île-de-France Mobilités.
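
    A minimal sketch of listing stop names from Arrets_Netex.xml; the namespace URI is the standard NeTEx one, and the choice of the Quay element is an assumption about which NeTEx objects the export uses:

    import xml.etree.ElementTree as ET

    NS = "{http://www.netex.org.uk/netex}"   # standard NeTEx namespace
    root = ET.parse("Arrets_Netex.xml").getroot()
    for quay in root.iter(NS + "Quay"):      # assumed stop object type
        name = quay.find(NS + "Name")
        if name is not None:
            print(quay.get("id"), name.text)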

    Documentation

    Documentation relating to the transport standards of Île-de-France Mobilités is available.

    Focus NeTEx

    NeTEx (Network Exchange) is a reference format for exchanging theoretical public transport offer data, defined at European level.


  15. Data from: Corpus extraction tool LIST 1.3

    • clarin.si
    • live.european-language-grid.eu
    Updated Aug 28, 2024
    + more versions
    Cite
    Luka Krsnik; Špela Arhar Holdt; Jaka Čibej; Kaja Dobrovoljc; Aleksander Ključevšek; Simon Krek; Marko Robnik-Šikonja (2024). Corpus extraction tool LIST 1.3 [Dataset]. https://clarin.si/repository/xmlui/handle/11356/1964
    Dataset updated
    Aug 28, 2024
    Authors
    Luka Krsnik; Špela Arhar Holdt; Jaka Čibej; Kaja Dobrovoljc; Aleksander Ključevšek; Simon Krek; Marko Robnik-Šikonja
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    The LIST corpus extraction tool is a Java program for extracting lists from text corpora on the levels of characters, word parts, words, and word sets. It supports VERT and TEI P5 XML formats and outputs .CSV files that can be imported into Microsoft Excel or similar statistical processing software.

    Version 1.3 adds support for the KOST 2.0 Slovene Learner Corpus (http://hdl.handle.net/11356/1887) in XML format. It also allows program execution using the command line (see 00README.txt for details), and uses a later version of Java (tested using JDK 21). In addition, Windows users no longer need to have Java installed on their computers to run the program.

  16. Reference and theoretical offer in Netex format | gimi9.com

    • gimi9.com
    Updated Jul 6, 2024
    Cite
    (2024). Reference and theoretical offer in Netex format | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_https-data-iledefrance-mobilites-fr-explore-dataset-referentiels-lignes-arrets-offre-netex-/
    Dataset updated
    Jul 6, 2024
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Description

    This dataset includes two XML files and a ZIP folder:

    • Arrets_Netex.xml, which describes data from the Île-de-France Mobilités stop repository in NeTEx format.
    • Lignes_Netex.xml, which describes data from the Île-de-France Mobilités line repository in NeTEx format.
    • offre_Netex.zip, which describes the data of the theoretical offer of Île-de-France Mobilités in NeTEx format.

    Attention: the offre_Netex.zip folder also contains files listing the stops (arrets.xml) and lines (lignes.xml). The data source for these files is the same as for Arrets_Netex.xml and Lignes_Netex.xml (the Île-de-France Mobilités repositories). However, arrets.xml and lignes.xml have the following specificities:

    • Only the objects used in the theoretical offer are present in these files.
    • The structure of the files is slightly different from that of the files taken directly from the repositories.

    The data available in this dataset is extracted from the web services of the repositories. The web services will be opened later via the PRIM portal of Île-de-France Mobilités.

    Documentation relating to the Île-de-France Mobilités transport repositories is available:

    • documentation on the repositories
    • documentation describing the structure of Arrets_Netex.xml
    • documentation describing the structure of Lignes_Netex.xml
    • documentation describing the structure of offre_Netex.zip

    Focus NeTEx: NeTEx (Network Exchange) is a reference format for exchanging theoretical public transport supply data, defined at European level.

  17. Hotel homestay - tourist information database

    • data.gov.tw
    compressed archive (壓縮檔)
    Updated Jul 15, 2025
    + more versions
    Cite
    Tourism Administration, Ministry of Transportation and Communications (2025). Hotel homestay - tourist information database [Dataset]. https://data.gov.tw/en/datasets/7780
    Available download formats: compressed archive (壓縮檔)
    Dataset updated
    Jul 15, 2025
    Dataset authored and provided by
    Tourism Administration, Ministry of Transportation and Communications
    License

    https://data.gov.tw/license

    Description

    The Ministry of Transportation and Communications Tourism Bureau collects spatial tourism information released by various government agencies, including data on tourist attractions, activities, dining and lodging, tourist service stations, hiking trails, and bike paths, providing comprehensive tourism GIS base data that operators can use to add value. For the XML field descriptions of each dataset, refer to Tourism Data Standard V1.0 at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for Tourism Data Standard V2.0 data, refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.

  18. MIRIAM Resources

    • neuinfo.org
    • scicrunch.org
    • +2more
    Updated Jan 29, 2022
    Cite
    MIRIAM Resources [Dataset]. https://neuinfo.org/data/record/nlx_144509-1/SCR_006697/resolver
    Explore at:
    Dataset updated
    Jan 29, 2022
    Description

    A set of online services created in support of MIRIAM, a set of guidelines for the annotation and curation of computational models. The core of MIRIAM Resources is a catalogue of data types (namespaces corresponding to controlled vocabularies or databases), their URIs, and the corresponding physical URLs or resources. Access to this data is made available via exports (XML) and Web Services (SOAP). MIRIAM Resources are developed and maintained under the BioModels.net initiative and are free for use by all. MIRIAM Resources are composed of four components:

    • Database: The core of the system is a MySQL database. It stores the data types (which can be controlled vocabularies or databases), their URIs and the corresponding physical URLs, and other details such as documentation and resource identifier patterns. Each entry contains a diverse set of details about the data type: official name and synonyms, root URI, pattern of identifiers, documentation, etc. Moreover, each data type can be associated with several resources (or physical locations).
    • Web Services: Programmatic access to the data is available via Web Services (based on Apache Axis and SOAP messages). In addition, REST-based services are currently being developed. This API allows one not only to resolve model annotations, but also to generate appropriate URIs given a resource name and an accession number. A list of available web services and a WSDL are provided. A browser-based online demonstration of the Web Services is also available to try.
    • Java Library: A Java library is provided to access the Web Services. The documentation explains where to download it, its dependencies, and how to use it.
    • Web Application: A Web application, using an Apache Tomcat server, offers access to the whole data set via a Web browser. It is possible to browse by data type names as well as by tags. A search engine is also provided.
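
    The URI-generation idea can be illustrated with a small local sketch; the "urn:miriam:<data type>:<accession>" layout shown follows the MIRIAM URI scheme, but production code should obtain URIs through the Web Services so that namespace updates are taken into account:

    # A hypothetical local helper, for illustration only.
    def miriam_uri(data_type: str, accession: str) -> str:
        return f"urn:miriam:{data_type}:{accession}"

    print(miriam_uri("pubmed", "16333295"))  # urn:miriam:pubmed:16333295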

  19. INSPIRE WFS Download service for the theme Hydrography-Network (HY NET) | gimi9.com

    • gimi9.com
    Updated Dec 19, 2024
    + more versions
    Cite
    (2024). INSPIRE WFS Download service for the theme Hydrography-Network (HY NET) | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_cz-cuzk-wfs-hy_net
    Dataset updated
    Dec 19, 2024
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The INSPIRE WFS Download Service for the theme Hydrography (HY) is a service that allows registered users to repeatedly download data using WFS 2.0.0 technology. The Download Service provides harmonized data for the INSPIRE theme Hydrography (HY), application schema Hydro-Network, corresponding to the INSPIRE XML schema in version 4.0. Data are provided in the GML 3.2.1 format and in the coordinate system ETRS89 / TM33, designated by INSPIRE for displaying large-scale datasets. This dataset of the hydrography of the Czech Republic therefore has a unified design with other data created for this INSPIRE theme across the whole of Europe. The basis of the dataset is the Fundamental Base of Geographic Data of the Czech Republic (ZABAGED®). The service meets the Technical Guidance for the implementation of INSPIRE Download Services, version 3.1, as well as the OGC WFS 2.0.0 standard.
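
    A minimal sketch of the kind of WFS 2.0.0 GetFeature request such a service accepts; the endpoint URL is a placeholder (access requires registration) and the feature type name is an assumption based on the INSPIRE Hydro-Network schema:

    from urllib.parse import urlencode

    endpoint = "https://example.cuzk.cz/inspire/wfs"  # placeholder endpoint
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "hy-n:WatercourseLink",  # assumed HY-N feature type
        "count": "10",
    }
    print(f"{endpoint}?{urlencode(params)}")  # GetFeature URL returning GML 3.2.1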

  20. Hotel accommodation (Japanese version) - Tourist Information Database

    • data.gov.tw
    compressed archive (壓縮檔)
    Cite
    Tourism Administration, Ministry of Transportation and Communications, Hotel accommodation (Japanese version) - Tourist Information Database [Dataset]. https://data.gov.tw/en/datasets/73281
    Available download formats: compressed archive (壓縮檔)
    Dataset authored and provided by
    Tourism Administration, Ministry of Transportation and Communications
    License

    https://data.gov.tw/license

    Description

    The Ministry of Transportation and Communications' Tourism Bureau collects spatial tourism information released by various government agencies, including data on tourist attractions, activities, food and lodging, tourist service stations, trails, and bike paths, providing comprehensive tourism GIS base data for value-added applications by businesses. For the XML field descriptions of each data set, refer to Tourism Data Standard V1.0 at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for Tourism Data Standard V2.0, refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.
