[begin excerpt from Integrating the Aprsworld Database Into Your Application]
The aprsworld.net project was started in March 2001 by James Jefferson Jarvis, KB0THN. The goal from the beginning has been to parse the APRS internet stream into data that can be stored in a relational database system.
At the time of writing (September 2003), about 1 million raw APRS packets traverse the internet stream each day. Each of these packets is parsed and inserted into the appropriate table of the aprsworld.net database. This results in about 5 million inserts a day, with an average of about 60 inserts / queries per second. The database grows by about 6 gigabytes per month.
By using the aprsworld.net database you can save yourself the trouble of collecting, parsing, and storing this large amount of data. Simple operations like finding the last position of an APRS station are extremely easy, and more complex data-mining operations are possible with minimal effort.
[end excerpt]
This script provides an XML interface to aprsworld.net, so you don't need direct access to the aprsworld or findu databases, or any knowledge of SQL, in order to get generalized, consistently formatted APRS data directly from the Internet into your application. Free code libraries for parsing XML are easy to find for almost any programming environment.
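As a rough illustration, fetching and parsing such an XML feed takes only a few lines in most environments. In the Python sketch below the endpoint URL, query parameter, and element/attribute names are placeholders, not the actual interface of this script; substitute the documented request format before using it.
import urllib.request
import xml.etree.ElementTree as ET

def last_position(callsign, endpoint="https://example.org/aprsworld-xml/position.cgi"):
    """Fetch the XML record for a station and return (latitude, longitude)."""
    url = f"{endpoint}?call={callsign}"  # hypothetical query parameter
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    station = tree.find("station")  # hypothetical element name
    if station is None:
        return None
    return float(station.get("latitude")), float(station.get("longitude"))

# Example: print(last_position("KB0THN"))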
As new minor versions of this script are made available, they will reside in their own directory containing the version number, so you can safely link to a script without future upgrade changes affecting anything. (Bugfix-level versions will not have their own directory.)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PLBD (Protein Ligand Binding Database) table description XML file
=================================================================
General
-------
The provided ZIP archive contains an XML file "main-database-description.xml" with the description of all tables (VIEWS) that are exposed publicly at the PLBD server (https://plbd.org/). In the XML file, all columns of the visible tables are described, specifying their SQL types, measurement units, semantics, calculation formulae, SQL statements that can be used to generate values in these columns, and publications of the formulae derivations.
The XML file conforms to the published XSD schema created for descriptions of relational databases for specifications of scientific measurement data. The XSD schema ("relational-database_v2.0.0-rc.18.xsd") and all included sub-schemas are provided in the same archive for convenience. All XSD schemas are validated against the "XMLSchema.xsd" schema from the W3C consortium.
The ZIP file contains an excerpt of the files hosted at https://plbd.org/ at the moment of submission of the PLBD database to the Scientific Data journal, and is provided to conform to the journal's policies. The current data and schemas should be fetched from the published URIs:
https://plbd.org/
https://plbd.org/doc/db/schemas
https://plbd.org/doc/xml/schemas
The software used to generate SQL schemas and RestfulDB metadata, and the RestfulDB middleware that allows databases generated from the XML description to be published on the Web, are available in public Subversion repositories:
svn://www.crystallography.net/solsa-database-scripts
svn://saulius-grazulis.lt/restfuldb
Usage
-----
Unpacking the ZIP file will create the "db/" directory with the tree layout given below. In addition to the database description file "main-database-description.xml", all XSD schemas necessary for validation of the XML file are provided. On a GNU/Linux operating system with the GNU Make package installed, the validity of the XML file can be checked by unpacking the ZIP file, entering the unpacked directory, and running 'make distclean; make'. For example, on a Linux Mint distribution, the following commands should work:
unzip main-database-description.zip
cd db/release/v0.10.0/tables/
sh -x dependencies/Linuxmint-20.1/install.sh
make distclean
make
If necessary, additional packages can be installed using the 'install.sh' script in the 'dependencies/' subdirectory corresponding to your operating system. At the moment of writing, Debian-10 and Linuxmint-20.1 OSes are supported out of the box; similar OSes might work with the same 'install.sh' scripts. The installation scripts need to run the package installation command with system administrator privileges, but they use *only* the standard system package manager, so they should not put your system at risk. For validation and syntax checking, the 'rxp' and 'xmllint' programs are used.
The log files provided in the "outputs/validation" subdirectory contain validation logs obtained on the system where the XML files were last checked and should indicate the validity of the provided XML file against the referenced schemas.
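If GNU Make is not available, the same check can be approximated with a short script. The sketch below uses the Python lxml package instead of xmllint/rxp; the relative paths assume the unpacked 'db/release/v0.10.0/tables/' directory and that the XSD files sit in the 'schema/' subdirectory, which should be verified against the actual layout.
from lxml import etree

# Load the bundled XSD schema and the database description file.
schema = etree.XMLSchema(etree.parse("schema/relational-database_v2.0.0-rc.18.xsd"))
doc = etree.parse("main-database-description.xml")

if schema.validate(doc):
    print("main-database-description.xml is valid")
else:
    for error in schema.error_log:
        print(error.line, error.message)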
Layout of the archived file tree
--------------------------------
db/
└── release
└── v0.10.0
└── tables
├── Makeconfig-validate-xml
├── Makefile
├── Makelocal-validate-xml
├── dependencies
├── main-database-description.xml
├── outputs
└── schema
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
The Lung Image Database Consortium image collection (LIDC-IDRI) consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.
Seven academic centers and eight medical imaging companies collaborated to create this data set, which contains 1018 cases. Each subject includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥ 3 mm," "nodule < 3 mm," and "non-nodule ≥ 3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus.
Note: The TCIA team strongly encourages users to review pylidc and the standardized DICOM representation of the TCIA LIDC-IDRI annotations/segmentations (DICOM-LIDC-IDRI-Nodules) included in this dataset before developing custom tools to analyze the XML version.
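For orientation, the pylidc route might look like the sketch below; it assumes pylidc is installed and configured to point at a local copy of the LIDC-IDRI DICOM data, and the patient ID used is only an example.
import pylidc as pl

scan = pl.query(pl.Scan).filter(pl.Scan.patient_id == "LIDC-IDRI-0001").first()
if scan is not None:
    # Group the marks from the four readers that refer to the same physical nodule.
    nodules = scan.cluster_annotations()
    print(scan.patient_id, "-", len(nodules), "nodules")
    for anns in nodules:
        malignancies = sorted(a.malignancy for a in anns)
        print("  marked by", len(anns), "readers; malignancy ratings:", malignancies)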
The DBLP computer science bibliography contains the metadata of over 1.8 million publications, written by over 1 million authors in several thousands of journals or conference proceedings series.
Although DBLP started with a focus on database systems and logic programming (hence the acronym), it has grown to cover all disciplines of computer science.
Resources list the full dump of the DBLP XML records (see http://dblp.uni-trier.de/xml/); a simple DTD is available.
The paper "DBLP - Some Lessons Learned" documents technical details of this XML file. In the appendix "DBLP XML Requests" you may find the description of a primitive DBLP API.
As of 2011-12-09 this data is open (released under ODC-By). See the license information in the Readme.txt and the announcement post: http://openbiblio.net/2011/12/09/dblp-releases-its-1-8-million-bibliographic-records-as-open-data/
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Read the XML data on Baltimore restaurants from here: https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Frestaurants.xml
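A minimal way to do this from Python is sketched below; the element name 'zipcode' is an assumption about the feed's structure, so inspect the downloaded document if the tag differs.
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

URL = "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Frestaurants.xml"
with urllib.request.urlopen(URL) as response:
    root = ET.parse(response).getroot()

# Count restaurants per zip code (assumed <zipcode> element).
zip_counts = Counter(el.text for el in root.iter("zipcode"))
print(zip_counts.most_common(5))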
The input data consists of the corridor structure and the traffic demand data. The traffic demand data is obtained from the public PeMS database; details can be found at http://pems.dot.ca.gov/. The road network is constructed and modified with netedit. A 10-mile-long freeway section of Interstate 80 Eastbound, with 6 junctions across the city of Davis, CA, is selected to evaluate our VSL control strategies. This section has a series of recurrent bottlenecks, and severe congestion occurs almost every day in the afternoon peak hours. These multiple bottlenecks are all "critical" along the path. Junction 70 is interconnected with SR-113, another freeway from the north, which introduces heavy merging traffic without metering. A significant lane drop from 6 to 3 lanes exists between Junctions 71 and 72. With saturated mainline flow and extra ramp demand at Junctions 75 and 78, the downstream traffic flow is prone to breakdown even with ramp metering activated in peak hours. Details can be found at https://sumo.dlr.de/docs/netedit.html. The output data is generated through the SUMO simulation. In this simulation, the Traffic Control Interface (TraCI) uses a TCP-based client/server architecture to connect to SUMO, retrieve values from vehicles and detectors, and drive the VSL control models whose simulation results are then analyzed. Details can be found at https://sumo.dlr.de/docs/TraCI.html.
Input Data:
vsl_I-80.net.xml: Definition of the 10-mile-long freeway section of the Interstate 80 Eastbound network connecting the city of Davis and West Sacramento in California
vsl_I-80.additionals.xml: Definition of induction loop detectors that capture the vehicle data at every simulation step
vsl_I-80.flow.xml: Definition of 5 hours of traffic demand data (OD pairs) with three typical demand sets (light, medium, and heavy)
vsl_I-80.rou.xml: Vehicle routes and trip information using shortest-path computation via the duarouter tool
vsl_I-80.sumocfg.xml: Configuration file that glues the input files together and makes the scenario executable by SUMO
Output Data:
emissions_no_vsl.xml: Output containing aggregated travel time, fuel consumption, and pollutants without any control strategy
emissions_static.xml: Output based on the flow-based control strategy
emissions_lqr.xml: Output based on the density-based LQR control strategy
This project aims at reducing fuel consumption and greenhouse gas emissions by applying variable speed limit (VSL) control strategies to traffic corridors with multiple segments and multiple bottlenecks. The dataset is composed of the inputs and outputs of the SUMO simulation model driven via the TraCI API. SUMO is a microscopic traffic simulation platform that allows a given traffic demand to be simulated on a given network. The inputs consist of the vehicle trip data obtained from the PeMS database for a 10-mile-long freeway section of Interstate 80 Eastbound, and the outputs are simulation results of the aggregated average travel time, fuel consumption, and carbon emissions under different VSL strategies with different optimal speed limits.
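The control loop behind such a run can be sketched as follows; the edge ID and the simple occupancy threshold are illustrative placeholders only, not the flow-based or LQR strategies evaluated in this dataset.
import traci

# Start SUMO with the scenario configuration and step through the simulation.
traci.start(["sumo", "-c", "vsl_I-80.sumocfg.xml"])
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        # Average occupancy over the induction loops defined in the additionals file.
        loops = traci.inductionloop.getIDList()
        occ = sum(traci.inductionloop.getLastStepOccupancy(d) for d in loops) / max(len(loops), 1)
        # Toy rule: lower the limit to 20 m/s when mean occupancy exceeds 25 %.
        limit = 20.0 if occ > 25.0 else 29.0
        traci.edge.setMaxSpeed("upstream_edge_id", limit)  # placeholder edge ID
finally:
    traci.close()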
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We provide a dataset of images (.jpeg) with their corresponding annotation files (.xml) used to train a bird detection deep learning model. These images were collected from the live stream feeds of the Cornell Lab of Ornithology (https://www.allaboutbirds.org/cams/), situated at 6 unique locations around the world, as follows:
Treman bird feeding garden at the Cornell Ornithology Laboratory in Ithaca, New York. At this station, Axis P11448-LE cameras are used to capture the recordings from feeders perched on the edge of both Sapsucker Woods and its 10-acre ponds. This site mainly attracts forest species like chickadees (Poecile atricapillus), red-winged blackbirds (Agelaius phoeniceus), and woodpeckers (Picidae). A total of 2065 images were captured from this location.
Fort Davis in Western Texas, USA. At this site, a total of 30 hummingbird feeder cams are hosted at an elevation of over 5500 feet. From this site, 1440 images were captured.
Sachatamia Lodge in Mindo, Ecuador. This site has a live hummingbird feed watcher that attracts over 132 species of hummingbirds including: Fawn-breasted Brilliant, White-necked Jacobin, Purple-bibbed Whitetip, Violet-tailed Sylph, Velvet-purple Coronet, and many others. A total of 2063 images were captured from this location.
Morris County, New Jersey, USA. Feeders at this location attract over 39 species including Red-bellied Woodpecker, Red-winged Blackbird, Purple Finch, Blue Jay, Pine Siskin, Hairy Woodpecker, and others. Footage at this site is captured by an Axis P1448-LE Camera and Axis T8351 Microphone. A total of 1876 images were recorded from this site.
Canopy Lodge in El Valle de Anton, Panama. Over 158 bird species visit this location annually and these include Gray-headed Chachalaca, Ruddy Ground-Dove, White-tipped Dove, Green Hermit, and others. A total of 1600 images were captured.
Southeast tip of South Island, New Zealand. At this site, nearly 10000 seabirds visit this location annually and a total of 1548 images were captured.
The Cornell Lab of Ornithology is an institute dedicated to biodiversity conservation, with a main focus on birds, through research, citizen science, and education. The autoscreen software was used to capture the images from the live feeds, and approximately 1-megapixel colour JPEG (Joint Photographic Experts Group) images with a resolution of 1366 x 768 x 3 pixels were collected (https://sourceforge.net/projects/autoscreen/). The software took a new image every 30 seconds, at different times of the day, in order to avoid a sample-biased dataset. In total, 10592 images were collected for this study.
Files provided
Train.zip – contains 6779 image files (.jpeg) and 6779 annotation files (.xml)
Validation.zip – contains 1695 image files (.jpeg) and 1695 annotation files (.xml)
Test.zip – contains 2118 image files (.jpeg)
Scripts.zip – contains scripts needed to manipulate the dataset, such as dataset partitioning and creation of CSV and tfrecord files (a minimal example of the XML-to-CSV step is sketched below).
This dataset was used in the MSc thesis titled “Investigating automated bird detection from webcams using machine learning” by Alex Mirugwe, University of Cape Town – South Africa.
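As a starting point for the CSV-creation step, the sketch below flattens the bounding boxes into one CSV row per object; it assumes the .xml files follow the PASCAL VOC layout produced by common labelling tools (filename, object/name, object/bndbox), which should be verified against a file from Train.zip.
import csv
import glob
import xml.etree.ElementTree as ET

with open("train_labels.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["filename", "label", "xmin", "ymin", "xmax", "ymax"])
    for path in glob.glob("Train/*.xml"):
        root = ET.parse(path).getroot()
        filename = root.findtext("filename")
        for obj in root.findall("object"):
            box = obj.find("bndbox")
            writer.writerow([filename, obj.findtext("name"),
                             box.findtext("xmin"), box.findtext("ymin"),
                             box.findtext("xmax"), box.findtext("ymax")])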
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This dataset consists of CT and PET-CT DICOM images of lung cancer subjects with XML Annotation files that indicate tumor location with bounding boxes. The images were retrospectively acquired from patients with suspicion of lung cancer, and who underwent standard-of-care lung biopsy and PET/CT. Subjects were grouped according to a tissue histopathological diagnosis. Patients with Names/IDs containing the letter 'A' were diagnosed with Adenocarcinoma, 'B' with Small Cell Carcinoma, 'E' with Large Cell Carcinoma, and 'G' with Squamous Cell Carcinoma.
The images were analyzed in the mediastinum (window width, 350 HU; level, 40 HU) and lung (window width, 1,400 HU; level, –700 HU) settings. The reconstructions were made with a 2 mm slice thickness in lung settings. The CT slice interval varies from 0.625 mm to 5 mm. Scanning modes include plain, contrast, and 3D reconstruction.
Before the examination, the patient underwent fasting for at least 6 hours, and the blood glucose of each patient was less than 11 mmol/L. Whole-body emission scans were acquired 60 minutes after the intravenous injection of 18F-FDG (4.44MBq/kg, 0.12mCi/kg), with patients in the supine position in the PET scanner. FDG doses and uptake times were 168.72-468.79MBq (295.8±64.8MBq) and 27-171min (70.4±24.9 minutes), respectively. 18F-FDG with a radiochemical purity of 95% was provided. Patients were allowed to breathe normally during PET and CT acquisitions. Attenuation correction of PET images was performed using CT data with the hybrid segmentation method. Attenuation corrections were performed using a CT protocol (180mAs,120kV,1.0pitch). Each study comprised one CT volume, one PET volume and fused PET and CT images: the CT resolution was 512 × 512 pixels at 1mm × 1mm, the PET resolution was 200 × 200 pixels at 4.07mm × 4.07mm, with a slice thickness and an interslice distance of 1mm. Both volumes were reconstructed with the same number of slices. Three-dimensional (3D) emission and transmission scanning were acquired from the base of the skull to mid femur. The PET images were reconstructed via the TrueX TOF method with a slice thickness of 1mm.
The location of each tumor was annotated by five academic thoracic radiologists with expertise in lung cancer to make this dataset a useful tool and resource for developing algorithms for medical diagnosis. Two of the radiologists had more than 15 years of experience and the others had more than 5 years of experience. After one of the radiologists labeled each subject, the other four radiologists performed a verification, resulting in all five radiologists reviewing each annotation file in the dataset. Annotations were captured using LabelImg. The image annotations are saved as XML files in PASCAL VOC format, which can be parsed using the PASCAL Development Toolkit: https://pypi.org/project/pascal-voc-tools/. Python code to visualize the annotation boxes on top of the DICOM images can be downloaded here.
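The downloadable visualization code is not reproduced here, but a minimal equivalent, assuming one VOC XML file and its matching DICOM slice, could look like this (requires pydicom and matplotlib):
import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import pydicom

def show_annotation(dicom_path, xml_path):
    """Overlay the first bounding box from a VOC XML file on a DICOM slice."""
    image = pydicom.dcmread(dicom_path).pixel_array
    box = ET.parse(xml_path).getroot().find(".//bndbox")
    xmin, ymin = int(box.findtext("xmin")), int(box.findtext("ymin"))
    xmax, ymax = int(box.findtext("xmax")), int(box.findtext("ymax"))
    fig, ax = plt.subplots()
    ax.imshow(image, cmap="gray")
    ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                   fill=False, edgecolor="red", linewidth=1.5))
    plt.show()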
Two deep learning researchers used the images and the corresponding annotation files to train several well-known detection models, which resulted in a mean average precision (mAP) of around 0.87 on the validation set.
Various species have been tracked using ARGOS PTT trackers since the early 1990s. These include Emperor, King and Adelie penguins; Light-mantled Sooty, Grey-headed and Black-browed albatrosses; Antarctic and Australian fur seals; Southern Elephant Seals; and Blue and Humpback whales. Note that not all data for any species or location is or will be exposed to OBIS. Geographic coverage is from Heard Island in the west to Macquarie Island in the east, plus several islands near the southern end of Chile. The data has been filtered to remove most, but not all, erroneous positions.
DiGIR is an engine which takes XML requests for data and returns a data subset stored as XML data (as defined in a schema). For more DiGIR information, see http://digir.sourceforge.net/ , http://diveintodigir.ecoforge.net/draft/digirdive.html , and http://digir.net/prov/prov_manual.html . A list of Digir providers is at http://bigdig.ecoforge.net/wiki/SchemaStatus .
Darwin is the original schema for use with the DiGIR engine.
The Ocean Biogeographic Information System (OBIS) schema extends Darwin. For more OBIS info, see http://www.iobis.org . See the OBIS schema at http://www.iobis.org/tech/provider/questions .
Queries: Although OBIS datasets have many variables, most variables have few values. The only queries that are likely to succeed MUST include a constraint for Genus= and MAY include constraints for Species=, longitude, latitude, and time.
Most OBIS datasets return a maximum of 1000 rows of data per request. The limitation is imposed by the OBIS administrators.
Available Genera (and number of records): (error)
cdm_data_type=Point
citation=See the following metadata records (contact the Data Centre for help on citation details):
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/DB_Argos_PTT_Tracking
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/HI_animaltracks_ARGOS
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_BI
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_Mag
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/STA_Bibliography
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_SI
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_EDP
http://data.aad.gov.au/aadc/metadata/metadata_redirect.cfm?md=AMD/AU/Tracking_DD
Conventions=COARDS, CF-1.6, ACDD-1.3
Easternmost_Easting=180.0
featureType=Point
geospatial_lat_max=90.0
geospatial_lat_min=-90.0
geospatial_lat_units=degrees_north
geospatial_lon_max=180.0
geospatial_lon_min=-180.0
geospatial_lon_units=degrees_east
geospatial_vertical_positive=up
geospatial_vertical_units=m
infoUrl=http://data.aad.gov.au/
institution=AADC
Northernmost_Northing=90.0
sourceUrl=http://aadc-maps.aad.gov.au/digir/digir.php
Southernmost_Northing=-90.0
standard_name_vocabulary=CF Standard Name Table v55
Westernmost_Easting=-180.0
https://data.gov.tw/license
The Ministry of Transportation and Communications' Tourism Bureau collects spatial tourism information published by various government agencies, including data on tourist attractions, activities, dining and accommodation, tourist service stations, trails, bicycle paths, etc., to provide comprehensive tourism GIS basic data for operators to enhance value-added applications. For the XML field descriptions of each dataset, please refer to Tourism Data Standard V1.0 at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for Tourism Data Standard V2.0 data, please refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Metaclusters obtained from the DPCfam clustering of UniRef50, v. 2017_07.
Metaclusters represent putative protein families automatically derived using the DPCfam method, as described in "Unsupervised protein family classification by Density Peak clustering", Russo ET, 2020, PhD Thesis, http://hdl.handle.net/20.500.11767/116345 (supervisors: Alessandro Laio, Marco Punta).
Visit also https://dpcfam.areasciencepark.it/ to easily navigate the data.
VERSION 1.1 changes:
FILES DESCRIPTION:
1) Standard DPCfam database
2) DPCfamB database
https://data.gov.tw/license
The Ministry of Transportation and Communications' Tourism Bureau collects spatial tourism information released by various government agencies, including data on tourist attractions, activities, dining and accommodation, tourism service locations, trails, bike paths, etc., providing comprehensive tourism GIS basic data for industry practitioners to add value. The XML field descriptions for each dataset are provided in the Tourism Data Standards: for V1.0 data, please refer to https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for V2.0 data, please refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.
GON is a software platform for biological pathway modeling and simulation. It is based on two architectures: hybrid functional Petri nets (HFPN) and XML technology. Pathway models based on HFPN are also explained in detail. Petri nets provide a method of describing concurrent systems, such as manufacturing systems and communication protocols, and of representing biological pathways graphically. Petri Net Pathways includes IL-1, G-protein, and TPO signaling pathways, as well as a new pathway model of p53 and related genes.
http://vvlibri.org/fr/licence/odbl-10/legalcode/unofficial
This dataset includes two XML files and a ZIP folder
The data available in this dataset is extracted from the web services of the repositories. The web services will be opened later via the PRIM portal of Île-de-France Mobilités.
Documentation
Documentation relating to the transport standards of Île-de-France Mobilités is available.
Focus NeTEx
NeTEx (Network Exchange) is a reference format for exchanging theoretical public transport offer data, defined at the European level.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The LIST corpus extraction tool is a Java program for extracting lists from text corpora on the levels of characters, word parts, words, and word sets. It supports VERT and TEI P5 XML formats and outputs .CSV files that can be imported into Microsoft Excel or similar statistical processing software.
Version 1.3 adds support for the KOST 2.0 Slovene Learner Corpus (http://hdl.handle.net/11356/1887) in XML format. It also allows program execution using the command line (see 00README.txt for details), and uses a later version of Java (tested using JDK 21). In addition, Windows users no longer need to have Java installed on their computers to run the program.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This dataset includes two XML files and a ZIP folder:
* Arrets_Netex.xml, which describes data from the Île-de-France Mobilités stop repository in NeTEx format.
* Lignes_Netex.xml, which describes data from the Île-de-France Mobilités line repository in NeTEx format.
* offer_Netex.zip, which describes the data of the theoretical offer of Île-de-France Mobilités in NeTEx format.
Attention: the offer_Netex.zip folder also contains files listing stops (arrets.xml) and lines (lignes.xml). The data source for these files is the same as for Arrets_Netex.xml and Lignes_Netex.xml (the Île-de-France Mobilités repositories). However, arrets.xml and lignes.xml have the following specificities:
* Only the objects used in the theoretical offer are present in these files.
* The structure of the files is slightly different from that of the files coming directly from the repositories.
The data available in this dataset is extracted from the web services of the repositories. The web services will be opened later via the PRIM portal of Île-de-France Mobilités.
Documentation
Documentation relating to the Île-de-France Mobilités transport repositories is available:
* see the documentation on the repositories
* see the documentation describing the structure of Arrets_Netex.xml
* see the documentation describing the structure of Lignes_Netex.xml
* see the documentation describing the structure of offer_Netex.zip
Focus NeTEx
NeTEx (Network Exchange) is a reference format for exchanging theoretical public transport offer data, defined at the European level.
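As an illustration, the stop file can be read with the Python standard library as sketched below; the NeTEx namespace URI and the StopPlace/Name element names are assumptions based on the published NeTEx schema and should be checked against the file and the documentation above.
import xml.etree.ElementTree as ET

NS = {"netex": "http://www.netex.org.uk/netex"}  # assumed namespace URI
root = ET.parse("Arrets_Netex.xml").getroot()
for stop in root.iter("{http://www.netex.org.uk/netex}StopPlace"):  # assumed element name
    name = stop.find("netex:Name", NS)
    print(stop.get("id"), name.text if name is not None else "")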
https://data.gov.tw/license
The Ministry of Transportation and Communications' Tourism Bureau collects spatial tourism information released by various government agencies, including information on tourist attractions, activities, dining and lodging, tourist service stations, hiking trails, bike paths, and other data, providing comprehensive tourism GIS basic data for operators to add value. For the XML field descriptions of each dataset, please refer to Tourism Data Standard V1.0 at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; for Tourism Data Standard V2.0 data, please refer to https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.
A set of online services created in support of MIRIAM, a set of guidelines for the annotation and curation of computational models. The core of MIRIAM Resources is a catalogue of data types (namespaces corresponding to controlled vocabularies or databases), their URIs and the corresponding physical URLs or resources. Access to this data is made available via exports (XML) and Web Services (SOAP). MIRIAM Resources are developed and maintained under the BioModels.net initiative, and are free for use by all. MIRIAM Resources are composed of four components: a database, Web Services, a Java library and a web application.
* Database: The core of the system is a MySQL database. It stores the data types (which can be controlled vocabularies or databases), their URIs and the corresponding physical URLs, and other details such as documentation and resource identifier patterns. Each entry contains a diverse set of details about the data type: official name and synonyms, root URI, pattern of identifiers, documentation, etc. Moreover, each data type can be associated with several resources (or physical locations).
* Web Services: Programmatic access to the data is available via Web Services (based on Apache Axis and SOAP messages). In addition, REST-based services are currently being developed. This API allows one not only to resolve model annotations, but also to generate appropriate URIs, given a resource name and accession number. A list of available web services and a WSDL are provided. A browser-based online demonstration of the Web Services is also available to try.
* Java Library: A Java library is provided to access the Web Services. The documentation explains where to download it, its dependencies, and how to use it.
* Web Application: A Web application, using an Apache Tomcat server, offers access to the whole data set via a Web browser. It is possible to browse by data type names as well as by tags. A search engine is also provided.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The INSPIRE WFS Download Service for the theme Hydrography (HY) is a service that allows registered users to repeatedly download data using WFS 2.0.0 technology. The Download Service provides harmonized data for the INSPIRE theme Hydrography (HY), application schema Hydro-Network, corresponding to the INSPIRE XML schema version 4.0. Data are provided in the GML 3.2.1 format and in the coordinate system ETRS89 / TM33, designated for INSPIRE to display datasets at large scales. This dataset of the hydrography of the Czech Republic therefore has a unified design with other data created for this INSPIRE theme across the whole of Europe. The basis of the dataset is the Fundamental Base of Geographic Data of the Czech Republic (ZABAGED®). The service meets the Technical Guidance for the implementation of INSPIRE Download Services, version 3.1, and also the OGC standard for WFS 2.0.0.
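For users scripting their downloads, a hedged sketch with the OWSLib package is shown below; the service URL and feature type name are placeholders, and the real values should be taken from the service's GetCapabilities response.
from owslib.wfs import WebFeatureService

# Connect to the WFS 2.0.0 endpoint and list the advertised feature types.
wfs = WebFeatureService(url="https://example.org/inspire/hy/wfs", version="2.0.0")  # placeholder URL
print(list(wfs.contents))

# Request one feature type (placeholder name) and save the GML 3.2.1 response locally.
response = wfs.getfeature(typename=["hy-n:WatercourseLink"])
with open("watercourse_links.gml", "wb") as fh:
    fh.write(response.read())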
https://data.gov.tw/license
The Ministry of Transportation and Communications' Tourism Bureau collects spatial tourism information released by various government agencies, including data on tourist attractions, activities, food and lodging, tourist service stations, trails, and bike paths, providing comprehensive tourism GIS basic data for value-added applications by businesses. The XML field descriptions for each data set, version 1.0 tourism data standard, can be found at https://media.taiwan.net.tw/Upload/TourismInformationStandardFormatV1.0.pdf; and version 2.0 tourism data standard at https://media.taiwan.net.tw/Upload/TourismDataStandardV2.0.pdf.