88 datasets found
  1. Council Tax: property attributes (England and Wales)

    • gov.uk
    Updated Jun 26, 2014
    Cite
    Valuation Office Agency (2014). Council Tax: property attributes (England and Wales) [Dataset]. https://www.gov.uk/government/statistics/council-tax-property-attributes
    Dataset updated
    Jun 26, 2014
    Dataset provided by
    GOV.UK (http://gov.uk/)
    Authors
    Valuation Office Agency
    Area covered
    Wales, England
    Description

    The first set of tables show, for each domestic property type in each geographic area, the number of properties assigned to each council tax band.

    The second set of tables provides a breakdown of domestic properties to a lower geographic level – Lower layer Super Output Area or ‘LSOA’, categorised by property type.

    The third set of tables shows, for each property build period in each geographic area, the number of properties assigned to each council tax band.

    The fourth set of tables provides a breakdown of domestic properties to a lower geographic level – Lower layer Super Output Area or ‘LSOA’, categorised by the property build period.

    The counts are calculated from domestic property data for England and Wales extracted from the Valuation Office Agency’s administrative database on 31 March 2014. Data on property types and number of bedrooms have been used to form property categories by which to view the data. Data on build period has been used to create property build period categories.

    Counts in the tables are rounded to the nearest 10; those below 5 are recorded as negligible and appear as ‘–’.
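The rounding-and-suppression rule can be sketched in a few lines (a minimal illustration assuming round-half-up to the nearest 10; the function name is ours, not the VOA's):

```python
def disclose(count):
    """Apply the release's disclosure rule to a raw property count.

    Counts below 5 are treated as negligible and shown as an en dash;
    all other counts are rounded to the nearest 10 (half-up assumed).
    """
    if count < 5:
        return "–"
    return str((count + 5) // 10 * 10)

print(disclose(3))   # –
print(disclose(27))  # 30
```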

    If you have any questions or comments about this release please contact:

    The VOA statistics team

    Email: statistics@voa.gov.uk

    Archived versions of this release

    Council Tax property attributes - 27 September 2012: http://webarchive.nationalarchives.gov.uk/20140712003745/http://www.voa.gov.uk/corporate/statisticalReleases/120927-CouncilTAxPropertyAttributes.html
    Council Tax property attributes - 1 September 2011: http://webarchive.nationalarchives.gov.uk/20140712003745/http://www.voa.gov.uk/corporate/statisticalReleases/110901-CouncilTAxPropertyAttributes.html
    Domestic property attributes - 14 April 2011: http://webarchive.nationalarchives.gov.uk/20140712003745/http://www.voa.gov.uk/corporate/statisticalReleases/DomesticPropertyAttributesIndex.html
    Council Tax property attribute data - 23 September 2010: http://webarchive.nationalarchives.gov.uk/20110320170052/http://www.voa.gov.uk/publications/statistical_releases/CT-property-attributes-september-2010/CT-property-attribute-data-Sept-2010.html

  2. Data from: lanai_geo - Geologic attributes of the coastal zone of Lanai,...

    • catalog.data.gov
    • data.usgs.gov
    • +4more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). lanai_geo - Geologic attributes of the coastal zone of Lanai, Hawaii [Dataset]. https://catalog.data.gov/dataset/lanai-geo-geologic-attributes-of-the-coastal-zone-of-lanai-hawaii
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Lanai City, Hawaii, Lanai
    Description

    Geologic attributes of the coastal zone of Lanai, Hawaii

  3. Data from: Car Evaluation Data Set

    • hypi.ai
    zip
    Updated Sep 1, 2017
    Cite
    Ahiale Darlington (2017). Car Evaluation Data Set [Dataset]. https://hypi.ai/wp/wp-content/uploads/2019/10/car-evaluation-data-set/
    Available download formats: zip (4775 bytes)
    Dataset updated
    Sep 1, 2017
    Authors
    Ahiale Darlington
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    from: https://archive.ics.uci.edu/ml/datasets/car+evaluation

    1. Title: Car Evaluation Database

    2. Sources:
       (a) Creator: Marko Bohanec
       (b) Donors: Marko Bohanec (marko.bohanec@ijs.si), Blaz Zupan (blaz.zupan@ijs.si)
       (c) Date: June, 1997

    3. Past Usage:

      The hierarchical decision model, from which this dataset is derived, was first presented in

      M. Bohanec and V. Rajkovic: Knowledge acquisition and explanation for multi-attribute decision making. In 8th Intl Workshop on Expert Systems and their Applications, Avignon, France. pages 59-78, 1988.

      Within machine learning, this dataset was used to evaluate HINT (Hierarchy INduction Tool), which proved able to completely reconstruct the original hierarchical model. This, together with a comparison with C4.5, is presented in

      B. Zupan, M. Bohanec, I. Bratko, J. Demsar: Machine learning by function decomposition. ICML-97, Nashville, TN. 1997 (to appear)

    4. Relevant Information Paragraph:

      Car Evaluation Database was derived from a simple hierarchical decision model originally developed for the demonstration of DEX (M. Bohanec, V. Rajkovic: Expert system for decision making. Sistemica 1(1), pp. 145-157, 1990.). The model evaluates cars according to the following concept structure:

      CAR                car acceptability
      . PRICE            overall price
      . . buying         buying price
      . . maint          price of the maintenance
      . TECH             technical characteristics
      . . COMFORT        comfort
      . . . doors        number of doors
      . . . persons      capacity in terms of persons to carry
      . . . lug_boot     the size of luggage boot
      . . safety         estimated safety of the car

      Input attributes are printed in lowercase. Besides the target concept (CAR), the model includes three intermediate concepts: PRICE, TECH, COMFORT. Every concept is in the original model related to its lower level descendants by a set of examples (for these examples sets see http://www-ai.ijs.si/BlazZupan/car.html).

      The Car Evaluation Database contains examples with the structural information removed, i.e., directly relates CAR to the six input attributes: buying, maint, doors, persons, lug_boot, safety.

      Because of known underlying concept structure, this database may be particularly useful for testing constructive induction and structure discovery methods.

    5. Number of Instances: 1728 (instances completely cover the attribute space)

    6. Number of Attributes: 6

    7. Attribute Values:

      buying:    v-high, high, med, low
      maint:     v-high, high, med, low
      doors:     2, 3, 4, 5-more
      persons:   2, 4, more
      lug_boot:  small, med, big
      safety:    low, med, high

    8. Missing Attribute Values: none

    9. Class Distribution (number of instances per class)

      class     N      N[%]
      unacc   1210   70.023 %
      acc      384   22.222 %
      good      69    3.993 %
      v-good    65    3.762 %
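The figures above are internally consistent: the six attributes listed earlier take 4, 4, 4, 3, 3 and 3 values respectively, whose product is 1728, and the class counts sum to the same total. A quick check:

```python
# Domain sizes for buying, maint, doors, persons, lug_boot, safety.
domain_sizes = [4, 4, 4, 3, 3, 3]
total = 1
for size in domain_sizes:
    total *= size
print(total)  # 1728 -- instances completely cover the attribute space

class_counts = {"unacc": 1210, "acc": 384, "good": 69, "v-good": 65}
print(sum(class_counts.values()))  # 1728
for cls, n in class_counts.items():
    print(f"{cls:>6}: {n:4d} ({100 * n / total:6.3f} %)")
```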

  4. CAMELS: Catchment Attributes and MEteorology for Large-sample Studies

    • gdex.ucar.edu
    • data.ucar.edu
    • +1more
    Updated Jun 24, 2022
    Cite
    Mizukami, M.; Addor, N.; Blodgett, D.; Viger, R. J.; Bock, A.; Clark, Martyn; Sampson, Kevin; Newman, Andrew; Andrew Newman; GDEX Curator (2022). CAMELS: Catchment Attributes and MEteorology for Large-sample Studies [Dataset]. http://doi.org/10.5065/D6MW2F4D
    Dataset updated
    Jun 24, 2022
    Dataset provided by
    University Corporation for Atmospheric Research
    Authors
    Mizukami, M.; Addor, N.; Blodgett, D.; Viger, R. J.; Bock, A.; Clark, Martyn; Sampson, Kevin; Newman, Andrew; Andrew Newman; GDEX Curator
    Time period covered
    Jan 1, 1980 - Dec 31, 2014
    Area covered
    Description

    The hydrometeorological time series together with the catchment attributes constitute the CAMELS dataset: Catchment Attributes and MEteorology for Large-sample Studies.

    TIME SERIES Data citation: A. Newman; K. Sampson; M. P. Clark; A. Bock; R. J. Viger; D. Blodgett, 2014. A large-sample watershed-scale hydrometeorological dataset for the contiguous USA. Boulder, CO: UCAR/NCAR. https://dx.doi.org/10.5065/D6MW2F4D

    Associated paper: A. J. Newman, M. P. Clark, K. Sampson, A. Wood, L. E. Hay, A. Bock, R. J. Viger, D. Blodgett, L. Brekke, J. R. Arnold, T. Hopson, and Q. Duan: Development of a large-sample watershed-scale hydrometeorological dataset for the contiguous USA: dataset characteristics and assessment of regional variability in hydrologic model performance. Hydrol. Earth Syst. Sci., 19, 209-223, doi:10.5194/hess-19-209-2015, 2015.

    We developed basin-scale hydrometeorological forcing data for 671 basins in the United States Geological Survey's Hydro-Climatic Data Network 2009 (HCDN-2009, Lins 2012) conterminous U.S. basin subset. Retrospective model forcings are derived from Daymet, NLDAS, and Maurer et al. (2002). Daymet and NLDAS forcing data run from 1 Jan 1980 to 31 Dec 2014, and Maurer runs from 1 January 1980 to 31 December 2008. Model timeseries output is available for the same time periods as the forcing data. USGS streamflow data are also provided for all basins for all dates available in the 1 Jan 1980 to 31 Dec 2014 period. We then implemented the hydrologic model and calibration routine traditionally used by the NWS: the SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) based hydrologic modeling system with the shuffled complex evolution (SCE) optimization approach (Duan et al. 1993).

    To retrieve the entire time series dataset, all five *.zip files should be downloaded. The basin_timeseries_v1p2_metForcing_obsFlow.zip file contains all the basin forcing data for all three meteorology products, observed streamflow, basin metadata, readme files, and basin shapefiles. The three modelOutput.zip files contain all the model output for the various forcing datasets denoted in the link names. Finally, the basin_set_full_res.zip file is a full resolution basin shapefile containing the original basin boundaries from the geospatial fabric.

    Note there are two versions of the basin shapefiles included in this dataset. The shapefile included with the basin forcing data was used to compute the basin forcing data and is a simplified representation of the basin boundaries which will include small holes in the interior of some basins where sub-basin HRU simplifications do not match. The full resolution shapefile does not have those discontinuities. The user can best determine which shapefile (or both) is appropriate for their needs.

    CATCHMENT ATTRIBUTES Data citation: Addor, A. Newman, M. Mizukami, and M. P. Clark, 2017. Catchment attributes for large-sample studies. Boulder, CO: UCAR/NCAR. https://doi.org/10.5065/D6G73C3Q

    Associated paper: Addor, N., Newman, A. J., Mizukami, N. and Clark, M. P.: The CAMELS data set: catchment attributes and meteorology for large-sample studies, Hydrol. Earth Syst. Sci., 21, 5293–5313, doi:10.5194/hess-21-5293-2017, 2017.

    This dataset covers the same 671 catchments as the Large-Sample Hydrometeorological Dataset introduced by Newman et al. (2015). For each catchment, we characterized a wide range of attributes that influence catchment behavior and hydrological processes. Datasets characterizing these attributes have been available separately for some time, but comprehensive multivariate catchment scale assessments have so far been difficult, because these datasets typically have different spatial configurations, are stored in different archives, or use different data formats. By creating catchment scale estimates of these attributes, our aim is to simplify the assessment of their interrelationships.

    Topographic characteristics (e.g. elevation and slope) were retrieved from Newman et al. (2015). Climatic indices (e.g., aridity and frequency of dry days) and hydrological signatures (e.g., mean annual discharge and baseflow index) were computed using the time series provided by Newman et al. (2015). Soil characteristics (e.g., porosity and soil depth) were characterized using the STATSGO dataset and the Pelletier et al. (2016) dataset. Vegetation characteristics (e.g. the leaf area index and the rooting depth) were inferred using MODIS data. Geological characteristics (e.g., geologic class and the subsurface porosity) were computed using the GLiM and GLHYMPS datasets.

    An essential feature that differentiates this dataset from similar ones is that it both provides quantitative estimates of diverse catchment attributes and involves assessments of the limitations of the data and methods used to compute those attributes (see Addor et al., 2017). The large number of catchments, combined with the diversity of their geophysical characteristics, makes these data well suited for large-sample studies and comparative hydrology.

  5. Data from: Attributes for NHDPlus Catchments (Version 1.1) for the...

    • catalog.data.gov
    • data.cnra.ca.gov
    • +5more
    Updated Nov 1, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: NLCD 2001 Land Use and Land Cover [Dataset]. https://catalog.data.gov/dataset/attributes-for-nhdplus-catchments-version-1-1-for-the-conterminous-united-states-nlcd-2001-ed76e
    Dataset updated
    Nov 1, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Contiguous United States, United States
    Description

    This data set represents the estimated area of land use and land cover from the National Land Cover Dataset 2001 (LaMotte, 2008), compiled for every catchment of NHDPlus for the conterminous United States. The source data set represents land use and land cover for the conterminous United States for 2001. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (http://www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS).

    The NHDPlus Version 1.1 is an integrated suite of application-ready geospatial datasets that incorporates many of the best features of the National Hydrography Dataset (NHD) and the National Elevation Dataset (NED). The NHDPlus includes a stream network (based on the 1:100,000-scale NHD), improved networking, naming, and value-added attributes (VAAs). NHDPlus also includes elevation-derived catchments (drainage areas) produced using a drainage enforcement technique first widely used in New England, and thus referred to as "the New England Method." This technique involves "burning in" the 1:100,000-scale NHD and, when available, building "walls" using the National Watershed Boundary Dataset (WBD). The resulting modified digital elevation model (HydroDEM) is used to produce hydrologic derivatives that agree with the NHD and WBD. Over the past two years, an interdisciplinary team from the U.S. Geological Survey (USGS), the U.S. Environmental Protection Agency (USEPA), and contractors found that this method produces the best quality NHD catchments using an automated process (USEPA, 2007).

    The NHDPlus dataset is organized by 18 Production Units that cover the conterminous United States. The NHDPlus version 1.1 data are grouped by the U.S. Geological Survey's Major River Basins (MRBs; Crawford and others, 2006). MRB1, covering the New England and Mid-Atlantic River basins, contains NHDPlus Production Units 1 and 2. MRB2, covering the South Atlantic-Gulf and Tennessee River basins, contains NHDPlus Production Units 3 and 6. MRB3, covering the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy River basins, contains NHDPlus Production Units 4, 5, 7 and 9. MRB4, covering the Missouri River basins, contains NHDPlus Production Units 10-lower and 10-upper. MRB5, covering the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf River basins, contains NHDPlus Production Units 8, 11 and 12. MRB6, covering the Rio Grande, Colorado and Great Basin River basins, contains NHDPlus Production Units 13, 14, 15 and 16. MRB7, covering the Pacific Northwest River basins, contains NHDPlus Production Unit 17. MRB8, covering California River basins, contains NHDPlus Production Unit 18.
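The MRB-to-Production-Unit groupings listed above can be restated as a simple lookup table (a sketch for convenience; the identifiers follow the listing, not any official file schema):

```python
# Major River Basins -> NHDPlus version 1.1 Production Units, per the listing above.
MRB_PRODUCTION_UNITS = {
    "MRB1": ["1", "2"],                # New England, Mid-Atlantic
    "MRB2": ["3", "6"],                # South Atlantic-Gulf, Tennessee
    "MRB3": ["4", "5", "7", "9"],      # Great Lakes, Ohio, Upper Mississippi, Souris-Red-Rainy
    "MRB4": ["10-lower", "10-upper"],  # Missouri
    "MRB5": ["8", "11", "12"],         # Lower Mississippi, Arkansas-White-Red, Texas-Gulf
    "MRB6": ["13", "14", "15", "16"],  # Rio Grande, Colorado, Great Basin
    "MRB7": ["17"],                    # Pacific Northwest
    "MRB8": ["18"],                    # California
}

# Every production unit appears in exactly one MRB.
all_units = [u for units in MRB_PRODUCTION_UNITS.values() for u in units]
print(len(all_units) == len(set(all_units)))  # True
```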

  6. BlogFeedback Data Set

    • kaggle.com
    zip
    Updated Jul 15, 2022
    Cite
    Julio Tentor (2022). BlogFeedback Data Set [Dataset]. https://www.kaggle.com/datasets/jtentor/blogfeedback-data-set
    Available download formats: zip (2550651 bytes)
    Dataset updated
    Jul 15, 2022
    Authors
    Julio Tentor
    Description

    Source:

    Krisztian Buza, Budapest University of Technology and Economics, buza '@' cs.bme.hu, http://www.cs.bme.hu/~buza

    You can download a zip file from https://archive.ics.uci.edu/ml/datasets/BlogFeedback

    Data Set Information:

    This data originates from blog posts. The raw HTML-documents of the blog posts were crawled and processed.

    The prediction task associated with the data is the prediction of the number of comments in the upcoming 24 hours.

    In order to simulate this situation, we choose a basetime (in the past) and select the blog posts that were published at most 72 hours before the selected base date/time. Then, we calculate all the features of the selected blog posts from the information that was available at the basetime, therefore each instance corresponds to a blog post. The target is the number of comments that the blog post received in the next 24 hours relative to the base time.
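The target construction can be sketched as follows (a hypothetical helper assuming per-comment timestamps are available; the published dataset ships only the precomputed features):

```python
from datetime import datetime, timedelta

def comments_next_24h(comment_times, basetime):
    """Target (attribute 281): comments received in the 24 hours after basetime."""
    window_end = basetime + timedelta(hours=24)
    return sum(basetime < t <= window_end for t in comment_times)

base = datetime(2012, 2, 1, 12, 0)
times = [base + timedelta(hours=h) for h in (-2, 1, 23, 25)]
print(comments_next_24h(times, base))  # 2 -- only the +1h and +23h comments fall in the window
```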

    In the train data, the base times were in the years 2010 and 2011. In the test data the base times were in February and March 2012.

    This simulates the real-world situation in which training data from the past is available to predict events in the future.

    The train data was generated from different base times that may temporally overlap, so if you simply split the train set into disjoint partitions, the underlying time intervals may still overlap. You should therefore use the provided, temporally disjoint train and test splits to ensure that the evaluation is fair.

    Attribute Information:

    1...50: Average, standard deviation, min, max and median of the Attributes 51...60 for the source of the current blog post. With source we mean the blog on which the post appeared. For example, myblog.blog.org would be the source of the post myblog.blog.org/post_2010_09_10

      51: Total number of comments before basetime
      52: Number of comments in the last 24 hours before the basetime
      53: Number of comments between T1 (48 hours before basetime) and T2 (24 hours before basetime)
      54: Number of comments in the first 24 hours after publication of the blog post, but before basetime
      55: The difference of Attribute 52 and Attribute 53
      56...60: The same features as Attributes 51...55, but for the number of links (trackbacks) rather than comments
      61: The length of time between publication of the blog post and basetime
      62: The length of the blog post
      63...262: The 200 bag-of-words features for 200 frequent words of the text of the blog post
      263...269: Binary indicator features (0 or 1) for the weekday (Monday...Sunday) of the basetime
      270...276: Binary indicator features (0 or 1) for the weekday (Monday...Sunday) of the date of publication of the blog post
      277: Number of parent pages: blog post P is a parent of blog post B if B is a reply (trackback) to P
      278...280: Minimum, maximum, and average number of comments that the parents received
      281: The target: the number of comments in the next 24 hours (relative to basetime)

    Relevant Papers:

    Buza, K. (2014). Feedback Prediction for Blogs. In Data Analysis, Machine Learning and Knowledge Discovery (pp. 145-152). Springer International Publishing (http://cs.bme.hu/~buza/pdfs/gfkl2012_blogs.pdf).

  7. Parcels, Compiled from Opt-In Open Data Counties, Minnesota

    • gisdata.mn.gov
    fgdb, gpkg, html +2
    Updated May 13, 2025
    Cite
    Geospatial Information Office (2025). Parcels, Compiled from Opt-In Open Data Counties, Minnesota [Dataset]. https://gisdata.mn.gov/dataset/plan-parcels-open
    Available download formats: html, gpkg, fgdb, webapp, jpeg
    Dataset updated
    May 13, 2025
    Dataset provided by
    Geospatial Information Office
    Area covered
    Minnesota
    Description

    This dataset is a compilation of county parcel data from Minnesota counties that have opted in to having their parcel data included in this dataset.

    It includes the following 55 counties that have opted-in as of the publication date of this dataset: Aitkin, Anoka, Becker, Benton, Big Stone, Carlton, Carver, Cass, Chippewa, Chisago, Clay, Clearwater, Cook, Crow Wing, Dakota, Douglas, Fillmore, Grant, Hennepin, Houston, Isanti, Itasca, Jackson, Koochiching, Lac qui Parle, Lake, Lyon, Marshall, McLeod, Mille Lacs, Morrison, Mower, Murray, Norman, Olmsted, Otter Tail, Pennington, Pipestone, Polk, Pope, Ramsey, Renville, Rice, Saint Louis, Scott, Sherburne, Stearns, Stevens, Traverse, Waseca, Washington, Wilkin, Winona, Wright, and Yellow Medicine.

    If you represent a county not included in this dataset and would like to opt-in, please contact Heather Albrecht (Heather.Albrecht@hennepin.us), co-chair of the Minnesota Geospatial Advisory Council (GAC)’s Parcels and Land Records Committee's Open Data Subcommittee. County parcel data does not need to be in the GAC parcel data standard to be included. MnGeo will map the county fields to the GAC standard.

    County parcel data records have been assembled into a single dataset with a common coordinate system (UTM Zone 15) and common attribute schema. The county parcel data attributes have been mapped to the GAC parcel data standard for Minnesota: https://www.mngeo.state.mn.us/committee/standards/parcel_attrib/parcel_attrib.html

    This compiled parcel dataset was created using Python code developed by Minnesota state agency GIS professionals, and represents a best effort to map individual county source file attributes into the common attribute schema of the GAC parcel data standard. The attributes from counties are mapped to the most appropriate destination column. In some cases, the county source files included attributes that were not mapped to the GAC standard. Additionally, some county attribute fields were parsed and mapped to multiple GAC standard fields, such as a single line address. Each quarter, MnGeo provides a text file to counties that shows how county fields are mapped to the GAC standard. Additionally, this text file shows the fields that are not mapped to the standard and those that are parsed. If a county shares changes to how their data should be mapped, MnGeo updates the compilation. If you represent a county and would like to update how MnGeo is mapping your county attribute fields to this compiled dataset, please contact us.
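The field mapping described above can be sketched as follows (the column names here are hypothetical, for illustration only; the actual GAC standard fields and MnGeo's code differ):

```python
# Hypothetical mapping from one county's source columns to GAC-style columns.
FIELD_MAP = {
    "PIN": "PARCEL_ID",
    "OWNER_NM": "OWNER_NAME",
    "SITE_ADDR": "ADDRESS",
}

def map_to_standard(record, field_map):
    """Rename one county parcel record to the common schema.

    Source fields with no mapping are dropped, mirroring the behavior
    described above for attributes not mapped to the standard.
    """
    return {dst: record[src] for src, dst in field_map.items() if src in record}

row = {"PIN": "12-345-678", "OWNER_NM": "DOE JOHN", "LOCAL_ONLY": "x"}
print(map_to_standard(row, FIELD_MAP))
# {'PARCEL_ID': '12-345-678', 'OWNER_NAME': 'DOE JOHN'}
```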

    This dataset is a snapshot of parcel data, and the source date of the county data may vary. Users should consult County websites to see the most up-to-date and complete parcel data.

    There have been recent changes in date/time fields, and their processing, introduced by our software vendor. In some cases, this has resulted in date fields being empty. We are aware of the issue and are working to correct it for future parcel data releases.

    The State of Minnesota makes no representation or warranties, express or implied, with respect to the use or reuse of data provided herewith, regardless of its format or the means of its transmission. THE DATA IS PROVIDED “AS IS” WITH NO GUARANTEE OR REPRESENTATION ABOUT THE ACCURACY, CURRENCY, SUITABILITY, PERFORMANCE, MERCHANTABILITY, RELIABILITY OR FITNESS OF THIS DATA FOR ANY PARTICULAR PURPOSE. This dataset is NOT suitable for accurate boundary determination. Contact a licensed land surveyor if you have questions about boundary determinations.

    DOWNLOAD NOTES: This dataset is only provided in Esri File Geodatabase and OGC GeoPackage formats. A shapefile is not available because the size of the dataset exceeds the limit for that format. The distribution version of the fgdb is compressed to help reduce the data footprint. QGIS users should consider using the GeoPackage format for better results.

  8. Large Scale International Boundaries

    • catalog.data.gov
    • geodata.state.gov
    • +1more
    Updated Jul 22, 2025
    Cite
    U.S. Department of State (Point of Contact) (2025). Large Scale International Boundaries [Dataset]. https://catalog.data.gov/dataset/large-scale-international-boundaries
    Dataset updated
    Jul 22, 2025
    Dataset provided by
    United States Department of State (http://state.gov/)
    Description

    Overview

    The Office of the Geographer and Global Issues at the U.S. Department of State produces the Large Scale International Boundaries (LSIB) dataset. The current edition is version 11.4 (published 24 February 2025). The 11.4 release contains updated boundary lines and data refinements designed to extend the functionality of the dataset. These data and generalized derivatives are the only international boundary lines approved for U.S. Government use. The contents of this dataset reflect U.S. Government policy on international boundary alignment, political recognition, and dispute status. They do not necessarily reflect de facto limits of control.

    National Geospatial Data Asset

    This dataset is a National Geospatial Data Asset (NGDAID 194) managed by the Department of State. It is a part of the International Boundaries Theme created by the Federal Geographic Data Committee.

    Dataset Source Details

    Sources for these data include treaties, relevant maps, and data from boundary commissions, as well as national mapping agencies. Where available and applicable, the dataset incorporates information from courts, tribunals, and international arbitrations. The research and recovery process includes analysis of satellite imagery and elevation data. Due to the limitations of source materials and processing techniques, most lines are within 100 meters of their true position on the ground.

    Cartographic Visualization

    The LSIB is a geospatial dataset that, when used for cartographic purposes, requires additional styling. The LSIB download package contains example style files for commonly used software applications. The attribute table also contains embedded information to guide the cartographic representation. Additional discussion of these considerations can be found in the Use of Core Attributes in Cartographic Visualization section below.
    Additional cartographic information pertaining to the depiction and description of international boundaries or areas of special sovereignty can be found in Guidance Bulletins published by the Office of the Geographer and Global Issues: https://data.geodata.state.gov/guidance/index.html

    Contact

    Direct inquiries to internationalboundaries@state.gov. Direct download: https://data.geodata.state.gov/LSIB.zip

    Attribute Structure

    The dataset uses the following attributes, divided into two categories:

    ATTRIBUTE NAME | ATTRIBUTE STATUS
    CC1 | Core
    CC1_GENC3 | Extension
    CC1_WPID | Extension
    COUNTRY1 | Core
    CC2 | Core
    CC2_GENC3 | Extension
    CC2_WPID | Extension
    COUNTRY2 | Core
    RANK | Core
    LABEL | Core
    STATUS | Core
    NOTES | Core
    LSIB_ID | Extension
    ANTECIDS | Extension
    PREVIDS | Extension
    PARENTID | Extension
    PARENTSEG | Extension

    These attributes have external data sources that update separately from the LSIB:

    ATTRIBUTE NAME | EXTERNAL SOURCE
    CC1 | GENC
    CC1_GENC3 | GENC
    CC1_WPID | World Polygons
    COUNTRY1 | DoS Lists
    CC2 | GENC
    CC2_GENC3 | GENC
    CC2_WPID | World Polygons
    COUNTRY2 | DoS Lists
    LSIB_ID | BASE
    ANTECIDS | BASE
    PREVIDS | BASE
    PARENTID | BASE
    PARENTSEG | BASE

    The core attributes listed above describe the boundary lines contained within the LSIB dataset. Removal of core attributes from the dataset will change the meaning of the lines. An attribute status of “Extension” represents a field containing data interoperability information. Other attributes not listed above include “FID”, “Shape_length” and “Shape.” These are components of the shapefile format and do not form an intrinsic part of the LSIB.

    Core Attributes

    The eight core attributes listed above contain unique information which, when combined with the line geometry, comprise the LSIB dataset. These core attributes are further divided into Country Code and Name Fields and Descriptive Fields.

    Country Code and Country Name Fields

    The “CC1” and “CC2” fields are machine-readable fields that contain political entity codes.
    These are two-character codes derived from the Geopolitical Entities, Names, and Codes Standard (GENC), Edition 3 Update 18. The “CC1_GENC3” and “CC2_GENC3” fields contain the corresponding three-character GENC codes and are extension attributes discussed below. The codes “Q2” or “QX2” denote a line in the LSIB representing a boundary associated with areas not contained within the GENC standard.

    The “COUNTRY1” and “COUNTRY2” fields contain the names of corresponding political entities. These fields contain names approved by the U.S. Board on Geographic Names (BGN) as incorporated in the "Independent States in the World" and "Dependencies and Areas of Special Sovereignty" lists maintained by the Department of State. To ensure maximum compatibility, names are presented without diacritics and certain names are rendered using common cartographic abbreviations. Names for lines associated with the code "Q2" are descriptive and not necessarily BGN-approved. Names rendered in all CAPITAL LETTERS denote independent states. Names rendered in normal text represent dependencies, areas of special sovereignty, or are otherwise presented for the convenience of the user.

    Descriptive Fields

    The following text fields are a part of the core attributes of the LSIB dataset and do not update from external sources. They provide additional information about each of the lines:

    ATTRIBUTE NAME | CONTAINS NULLS
    RANK | No
    STATUS | No
    LABEL | Yes
    NOTES | Yes

    Neither the "RANK" nor "STATUS" fields contain null values; the "LABEL" and "NOTES" fields do. The "RANK" field is a numeric expression of the "STATUS" field. Combined with the line geometry, these fields encode the views of the United States Government on the political status of the boundary line.
    The correspondence between the "RANK" and "STATUS" fields is:

    RANK | STATUS
    1 | International Boundary
    2 | Other Line of International Separation
    3 | Special Line

    A value of “1” in the “RANK” field corresponds to an "International Boundary" value in the “STATUS” field. Values of “2” and “3” correspond to “Other Line of International Separation” and “Special Line,” respectively.

    The “LABEL” field contains required text to describe the line segment on all finished cartographic products, including but not limited to print and interactive maps. The “NOTES” field contains an explanation of special circumstances modifying the lines. This information can pertain to the origins of the boundary lines, limitations regarding the purpose of the lines, or the original source of the line.

    Use of Core Attributes in Cartographic Visualization

    Several of the core attributes provide information required for the proper cartographic representation of the LSIB dataset. The cartographic usage of the LSIB requires a visual differentiation between the three categories of boundary lines: International Boundaries (Rank 1), Other Lines of International Separation (Rank 2), and Special Lines (Rank 3). Rank 1 lines must be the most visually prominent. Rank 2 lines must be less visually prominent than Rank 1 lines. Rank 3 lines must be shown in a manner visually subordinate to Ranks 1 and 2. Where scale permits, Rank 2 and 3 lines must be labeled in accordance with the “LABEL” field. Data marked with a Rank 2 or 3 designation does not necessarily correspond to a disputed boundary. Please consult the style files in the download package for examples of this depiction.

    The requirement to incorporate the contents of the "LABEL" field on cartographic products is scale dependent. If a label is legible at the scale of a given static product, a proper use of this dataset would encourage the application of that label.
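The RANK/STATUS correspondence and the required visual ordering can be captured in a small lookup (the stroke widths below are illustrative only; the style files shipped with the download package are authoritative):

```python
# RANK is a numeric expression of STATUS, per the LSIB attribute description.
RANK_TO_STATUS = {
    1: "International Boundary",
    2: "Other Line of International Separation",
    3: "Special Line",
}

# Illustrative stroke widths: Rank 1 most prominent, Rank 3 least.
RANK_TO_WIDTH_PT = {1: 1.2, 2: 0.8, 3: 0.5}

def line_style(rank):
    """Return a minimal style record for a boundary line of the given rank."""
    return {"status": RANK_TO_STATUS[rank], "width_pt": RANK_TO_WIDTH_PT[rank]}

print(line_style(1))
```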
Using the contents of the "COUNTRY1" and "COUNTRY2" fields to generate a line-segment label is not required. The "STATUS" field contains the preferred description for the three LSIB line types when they are incorporated into a map legend, but it is otherwise not to be used for labeling. Use of the “CC1,” “CC1_GENC3,” “CC2,” “CC2_GENC3,” “RANK,” or “NOTES” fields for cartographic labeling purposes is prohibited.

Extension Attributes

Certain elements of the attributes within the LSIB dataset extend data functionality to make the data more interoperable or to provide clearer linkages to other datasets. The “CC1_GENC3” and “CC2_GENC3” fields contain the three-character GENC codes corresponding to the “CC1” and “CC2” attributes. The code “QX2” is the three-character counterpart of the code “Q2,” which denotes a line in the LSIB representing a boundary associated with a geographic area not contained within the GENC standard.

To allow for linkage between individual lines in the LSIB and the World Polygons dataset, the “CC1_WPID” and “CC2_WPID” fields contain a Universally Unique Identifier (UUID), version 4, which provides a stable identifier for each geographic entity in a boundary pair relationship. Each UUID corresponds to a geographic entity listed in the World Polygons dataset.

Five additional fields in the LSIB expand on the UUID concept and either describe features that have changed across space and time or indicate relationships between previous versions of a feature. The “LSIB_ID” attribute is a UUID value that defines a specific instance of a feature; any change to a feature in a lineset requires a new “LSIB_ID.” The “ANTECIDS,” or antecedent ID, is a UUID that references line geometries from which a given line is descended in time. It is used when a feature is entirely new, not when there is a new version of a previous feature.
This is generally used to reference countries that have dissolved. The “PREVIDS,” or previous ID, is a UUID field that contains the IDs of old versions of a line. It is an additive field that houses all previous IDs. A new version of a feature is defined by any change to the
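A minimal sketch of how the UUID linkage fields described above can be used to resolve the two entities a boundary line separates. All record values here are invented; only the field names (LSIB_ID, CC1_WPID, CC2_WPID, ANTECIDS, PREVIDS) come from the dataset.

```python
# Sketch: joining an LSIB line record to World Polygons entities via the
# UUID linkage fields. Records and names are made-up examples.
import uuid

wpid_a = str(uuid.uuid4())  # hypothetical World Polygons UUID for entity A
wpid_b = str(uuid.uuid4())  # hypothetical World Polygons UUID for entity B

# A stand-in for the World Polygons dataset: UUID -> entity name.
world_polygons = {wpid_a: "COUNTRY A", wpid_b: "Country B (dependency)"}

lsib_line = {
    "LSIB_ID": str(uuid.uuid4()),  # identifies this version of the line
    "CC1_WPID": wpid_a,
    "CC2_WPID": wpid_b,
    "PREVIDS": [],                 # no earlier versions of this feature
    "ANTECIDS": [],                # no dissolved antecedent geometries
}

def boundary_pair(line, polygons):
    """Resolve the two polygon entities a line separates."""
    return (polygons[line["CC1_WPID"]], polygons[line["CC2_WPID"]])
```

Because the WPID values are stable, the same join works across dataset releases even when a line's own LSIB_ID changes.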

  9. Webis-Argument-Attributes

    • webis.de
    html
    Updated 2020
    Cite
    Khalid Al-Khatib (2020). Webis-Argument-Attributes [Dataset]. https://webis.de/data/webis-argument-attributes.html
    Explore at:
    Available download formats: html
    Dataset updated
    2020
    Dataset provided by
    The Web Technology & Information Systems Network
    University of Groningen
    Authors
    Khalid Al-Khatib
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The topic-agnostic attributes compiled from the Ph.D. dissertation of Khalid Al-Khatib: Computational Analysis of Argumentation Strategies.

  10. Data from: Attributes for NHDPlus Catchments (Version 1.1) for the...

    • catalog.data.gov
    • data.usgs.gov
    • +6more
    Updated Nov 28, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: STATSGO Soil Characteristics [Dataset]. https://catalog.data.gov/dataset/attributes-for-nhdplus-catchments-version-1-1-for-the-conterminous-united-states-statsgo-s
    Explore at:
    Dataset updated
    Nov 28, 2024
    Dataset provided by
    United States Geological Survey: http://www.usgs.gov/
    Area covered
    Contiguous United States, United States
    Description

    This data set represents estimated soil variables compiled for every catchment of NHDPlus for the conterminous United States. The variables included are cation exchange capacity, percent calcium carbonate, slope, water-table depth, soil thickness, hydrologic soil group, soil erodibility (k-factor), permeability, average water capacity, bulk density, percent organic material, percent clay, percent sand, and percent silt. The source data set is the State Soil Geographic (STATSGO) Database (Wolock, 1997). The NHDPlus Version 1.1 is an integrated suite of application-ready geospatial datasets that incorporates many of the best features of the National Hydrography Dataset (NHD) and the National Elevation Dataset (NED). The NHDPlus includes a stream network (based on the 1:100,000-scale NHD), improved networking, naming, and value-added attributes (VAAs). NHDPlus also includes elevation-derived catchments (drainage areas) produced using a drainage enforcement technique first widely used in New England, and thus referred to as "the New England Method." This technique involves "burning in" the 1:100,000-scale NHD and, when available, building "walls" using the National Watershed Boundary Dataset (WBD). The resulting modified digital elevation model (HydroDEM) is used to produce hydrologic derivatives that agree with the NHD and WBD. Over the past two years, an interdisciplinary team from the U.S. Geological Survey (USGS), the U.S. Environmental Protection Agency (USEPA), and contractors found that this method produces the best quality NHD catchments using an automated process (USEPA, 2007). The NHDPlus dataset is organized by 18 Production Units that cover the conterminous United States. The NHDPlus version 1.1 data are grouped by the U.S. Geological Survey's Major River Basins (MRBs; Crawford and others, 2006). MRB1, covering the New England and Mid-Atlantic River basins, contains NHDPlus Production Units 1 and 2. 
MRB2, covering the South Atlantic-Gulf and Tennessee River basins, contains NHDPlus Production Units 3 and 6. MRB3, covering the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy River basins, contains NHDPlus Production Units 4, 5, 7 and 9. MRB4, covering the Missouri River basins, contains NHDPlus Production Units 10-lower and 10-upper. MRB5, covering the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf River basins, contains NHDPlus Production Units 8, 11 and 12. MRB6, covering the Rio Grande, Colorado and Great Basin River basins, contains NHDPlus Production Units 13, 14, 15 and 16. MRB7, covering the Pacific Northwest River basins, contains NHDPlus Production Unit 17. MRB8, covering California River basins, contains NHDPlus Production Unit 18.
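The MRB grouping described above amounts to a simple lookup table; the sketch below restates it in Python for convenience (the dict structure and function name are illustrative, not part of the product).

```python
# Sketch: Major River Basin -> NHDPlus Production Units, restating the
# grouping given in the description above.
MRB_PRODUCTION_UNITS = {
    "MRB1": ["1", "2"],                 # New England, Mid-Atlantic
    "MRB2": ["3", "6"],                 # South Atlantic-Gulf, Tennessee
    "MRB3": ["4", "5", "7", "9"],       # Great Lakes, Ohio, Upper Miss., Souris-Red-Rainy
    "MRB4": ["10-lower", "10-upper"],   # Missouri
    "MRB5": ["8", "11", "12"],          # Lower Miss., Arkansas-White-Red, Texas-Gulf
    "MRB6": ["13", "14", "15", "16"],   # Rio Grande, Colorado, Great Basin
    "MRB7": ["17"],                     # Pacific Northwest
    "MRB8": ["18"],                     # California
}

def mrb_for_unit(unit: str) -> str:
    """Return the Major River Basin containing a given Production Unit."""
    for mrb, units in MRB_PRODUCTION_UNITS.items():
        if unit in units:
            return mrb
    raise KeyError(unit)
```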

  11. Core Reference Data

    • six-group.com
    Updated Apr 19, 2020
    Cite
    SIX Group (2020). Core Reference Data [Dataset]. https://www.six-group.com/en/products-services/financial-information/delivery-methods/files/sixflex/core-reference-data.html
    Explore at:
    Dataset updated
    Apr 19, 2020
    Dataset provided by
    SIX Group
    Area covered
    Global
    Description

    The need for accurate, timely and complete Reference Data is vital for the efficient functioning of the financial ecosystem. With our Core Reference Data service, you receive extensive information on a predefined set of data points across a broad range of asset classes to support the maintenance of your securities master database. The information on key data attributes enables compliance with regulatory and risk management requirements while maximizing operational efficiency.

  12. ArcGIS Tool: Inserts file name into attribute table

    • data.wu.ac.at
    • datadiscoverystudio.org
    • +1more
    zip
    Updated Jun 24, 2013
    Cite
    Department of the Interior (2013). ArcGIS Tool: Inserts file name into attribute table [Dataset]. https://data.wu.ac.at/schema/data_gov/MGZmNGZlM2EtYWEyNy00ODRmLTlhODctNGE2YmJlOWFiOGQ1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 24, 2013
    Dataset provided by
    Department of the Interior
    Description

    This ArcGIS model inserts a file name into a feature class attribute table. The tool allows a user to identify features by a field that references the name of the original file. It is useful when a user has to merge multiple feature classes and needs to identify which layer each feature came from.
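The idea behind the tool can be sketched outside ArcGIS in a few lines of plain Python: before merging feature records from several files, stamp each record with the name of the file it came from. The attribute name `src_file` and the file names are hypothetical.

```python
# Sketch of the tool's idea in plain Python (not arcpy): tag each feature
# with its source file name before merging, so origins stay traceable.
import os

def tag_with_source(features, path):
    """Add a 'src_file' attribute holding the originating file name."""
    name = os.path.basename(path)
    return [{**feat, "src_file": name} for feat in features]

roads_a = [{"id": 1}, {"id": 2}]   # features from the first file
roads_b = [{"id": 1}]              # features from the second file

merged = (tag_with_source(roads_a, "data/roads_2012.shp")
          + tag_with_source(roads_b, "data/roads_2013.shp"))
```

After the merge, duplicate IDs remain distinguishable by their `src_file` value.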

  13. Soil and Landscape Grid National Soil Attribute Maps - Available Phosphorus...

    • researchdata.edu.au
    • data.csiro.au
    datadownload
    Updated Aug 28, 2024
    + more versions
    Cite
    Peter Zund; Peter Zund (2024). Soil and Landscape Grid National Soil Attribute Maps - Available Phosphorus (3" resolution) - Release 1 [Dataset]. http://doi.org/10.25919/6QZH-B979
    Explore at:
    Available download formats: datadownload
    Dataset updated
    Aug 28, 2024
    Dataset provided by
    CSIRO: http://www.csiro.au/
    Authors
    Peter Zund; Peter Zund
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1970 - Jul 27, 2022
    Area covered
    Description

    This is Version 1 of the Australian Available Phosphorus product of the Soil and Landscape Grid of Australia.

    The map gives a modelled estimate of the spatial distribution of available phosphorus in soils across Australia.

    The Soil and Landscape Grid of Australia has produced a range of digital soil attribute products. Each product contains six digital soil attribute maps, and their upper and lower confidence limits, representing the soil attribute at six depths: 0-5cm, 5-15cm, 15-30cm, 30-60cm, 60-100cm and 100-200cm. These depths are consistent with the specifications of the GlobalSoilMap.net project (https://esoil.io/TERNLandscapes/Public/Pages/SLGA/Resources/GlobalSoilMap_specifications_december_2015_2.pdf). The digital soil attribute maps are in raster format at a resolution of 3 arc sec (~90 x 90 m pixels).

    Detailed information about the Soil and Landscape Grid of Australia can be found at - https://esoil.io/TERNLandscapes/Public/Pages/SLGA/index.html

    Attribute Definition: Available Phosphorus; Units: mg/kg; Period (temporal coverage; approximately): 1970-2021; Spatial resolution: 3 arc seconds (approx 90 m); Total number of gridded maps for this attribute: 18; Number of pixels with coverage per layer: 2007M (49200 * 40800); Data license: Creative Commons Attribution 4.0 (CC BY); Target data standard: GlobalSoilMap specifications; Format: Cloud Optimised GeoTIFF; Lineage: This dataset models the spatial distribution of Available Phosphorus using a commonly measured analyte, bicarbonate-extractable phosphorus (Colwell P) (Methods 9B1 & 9B2 - Rayment and Lyons 2010). It provides estimates of Colwell P across Australia for each Global Soil Map (GSM) depth interval at a 3 arc-second resolution (80-100 m pixels, depending on location in Australia). The data is supplied as single-band GeoTIFF rasters and includes the 5th, 50th and 95th percentile predictions (based on a 90% confidence interval) for each GSM depth.
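The count of 18 gridded maps follows directly from six depth intervals times three percentile estimates; a small sketch (with an invented file-naming pattern, not the actual product names):

```python
# Sketch: why this attribute ships as 18 gridded maps. Six GlobalSoilMap
# depth intervals, each with 5th, 50th and 95th percentile estimates
# (a 90% confidence interval around the median prediction). The file-name
# pattern below is illustrative only.
depths = ["0-5cm", "5-15cm", "15-30cm", "30-60cm", "60-100cm", "100-200cm"]
percentiles = ["05", "50", "95"]

layers = [f"AVP_{d}_p{p}.tif" for d in depths for p in percentiles]
```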

    Legacy Colwell P data stored in Australian government agency soil databases, from non-fertilised, non-cropped, relatively undisturbed sites, were used to estimate available phosphorus. No new P data were collected for this project. Agency data were accessed using the newly developed Soil Data Federator web API (Searle, pers. comm.). The Colwell P point data were combined with environmental covariates from the TERN national set to build a model of how Colwell P varies across Australia. Covariates were selected that best reflected the geography, geology, and climate of Australia. The model was built using the Random Forests machine learning algorithm, which is commonly used in digital soil mapping in Australia.

    All processing for the generation of these products was undertaken using the R programming language. R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

    Code - https://github.com/AusSoilsDSM/SLGA Observation data - https://esoil.io/TERNLandscapes/Public/Pages/SoilDataFederator/SoilDataFederator.html Covariate rasters - https://esoil.io/TERNLandscapes/Public/Pages/SLGA/GetData-COGSDataStore.html

  14. A dataset of Data Subject Access Request Packages

    • zenodo.org
    Updated Jul 5, 2024
    Cite
    Nicola Leschke; Nicola Leschke; Daniela Pöhn; Daniela Pöhn; Frank Pallas; Frank Pallas (2024). A dataset of Data Subject Access Request Packages [Dataset]. http://doi.org/10.5281/zenodo.11634938
    Explore at:
    Dataset updated
    Jul 5, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Nicola Leschke; Nicola Leschke; Daniela Pöhn; Daniela Pöhn; Frank Pallas; Frank Pallas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    This dataset is a minimal example of Data Subject Access Request Packages (SARPs), as they can be retrieved under data protection laws, specifically the GDPR. It includes data from two data subjects, each with accounts for five major services, namely Amazon, Apple, Facebook, Google, and LinkedIn.

    Purpose and Usage

    This dataset is meant to be an initial dataset that allows for manual exploration of structures and contents found in SARPs. Hence, the number of controllers and user profiles should be minimal but sufficient to allow cross-subject and cross-controller analysis. This dataset can be used to explore structures, formats and data types found in real-world SARPs. Thereby, the planning of future SARP-based research projects and studies shall be facilitated.

    We invite other researchers to use this dataset to explore the structure of SARPs. The envisioned primary usage includes the development of user-centric privacy interfaces and other technical contributions in the area of data access rights. Moreover, these packages can also be used for exemplified data analyses, although no substantive research questions can be answered using this data. In particular, this data does not reflect how data subjects behave in the real world. However, it is representative enough to give a first impression of the types of data analysis possible with real-world data.

    Data Generation

    In order to allow cross-subject analysis, while keeping the re-identification risk minimal, we used research-only accounts for the data generation. A detailed explanation of the data generation method can be found in the paper corresponding to the dataset, accepted for the Annual Privacy Forum 2024.

    In short, two user profiles were designed and corresponding accounts were created for each of the five services. Then, those accounts were used for two to four months. During the usage period, we minimized the amount of identifying data and also avoided interactions with data subjects not part of this research. Afterwards, we performed a data access request via each controller's web interface. Finally, the data was cleansed as described in detail in the accompanying paper and in brief within the following section.

    Data Cleansing

    Before publication, both possibly identifying information and security-relevant attributes need to be obfuscated or deleted. Moreover, multi-party data (especially messages with external entities) must be deleted. Where data is obfuscated, we made sure to substitute multiple occurrences of the same information with the same replacement.
    We provide a list of deleted and obfuscated items, the obfuscation scheme and, if applicable, the replacement.

    The list of obfuscated items looks like the following example:

    path | filetype | filename | attribute | scheme | replacement
    linkedin\Linkedin_Basic | csv | messages.csv | TO | semantic description | Firstname Lastname
    gooogle\Meine Aktivitäten\Datenexport | html | MeineAktivitäten.html | IP Address | loopback | 127.142.201.194
    facebook\personal_information | json | profile_information.json | emails | semantic description | firstname.lastname@gmail.com
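The substitution rule described above (same information, same replacement) can be sketched as a stable lookup table; the names and values below are invented for illustration.

```python
# Sketch: consistent obfuscation. Repeated occurrences of the same
# identifying value map to the same replacement, so cross-references
# inside a package stay intact. All inputs here are invented.
replacements = {}

def obfuscate(value, make_replacement):
    """Return a stable replacement for `value`, creating one on first use."""
    if value not in replacements:
        replacements[value] = make_replacement(len(replacements))
    return replacements[value]

fake_name = lambda i: f"Person_{i}"

a = obfuscate("Jane Doe", fake_name)
b = obfuscate("John Roe", fake_name)
c = obfuscate("Jane Doe", fake_name)  # same input, same replacement
```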

    Data Characterization

    To give an overview of the dataset, we publicly provide some metadata about the usage time and SARP characteristics of the exports from subject A / subject B.

    provider | usage time (in months) | export options | file types | # subfolders | # files | export size
    Amazon | 2/4 | all categories | CSV (32/49), EML (2/5), JPEG (1/2), JSON (3/3), PDF (9/10), TXT (4/4) | 41/49 | 51/73 | 1.2 MB / 1.4 MB
    Apple | 2/4 | all data; max. 1 GB / max. 4 GB | CSV (8/3) | 2/0 | 18/3 | 71.8 KB / 294.8 KB
    Facebook | 2/4 | all data; JSON/HTML; on my computer | JSON (39/0), HTML (0/63), TXT (29/28), JPG (0/4), PNG (1/15), GIF (7/7) | 45/76 | 76/117 | 12.3 MB / 13.5 MB
    Google | 2/4 | all data; frequency once; ZIP; max. 4 GB | HTML (8/11), CSV (10/13), JSON (27/28), TXT (14/14), PDF (1/1), MBOX (1/1), VCF (1/0), ICS (1/0), README (1/1), JPG (0/2) | 44/51 | 64/71 | 1.54 MB / 1.2 MB
    LinkedIn | 2/4 | all data | CSV (18/21) | 0/0 (part 1), 0/0 (part 2) | 13/18 (part 1), 19/21 (part 2) | 3.9 KB / 6.0 KB (part 1), 6.2 KB / 9.2 KB (part 2)


    Authors

    This data collection was performed by Daniela Pöhn (Universität der Bundeswehr München, Germany), Frank Pallas and Nicola Leschke (Paris Lodron Universität Salzburg, Austria). For questions, please contact nicola.leschke@plus.ac.at.

    Accompanying Paper

    The dataset was collected according to the method presented in:
    Leschke, Pöhn, and Pallas (2024). "How to Drill Into Silos: Creating a Free-to-Use Dataset of Data Subject Access Packages". Accepted for Annual Privacy Forum 2024.

  15. Data from: Attributes for NHDPlus Catchments (Version 1.1) for the...

    • data.usgs.gov
    • datadiscoverystudio.org
    • +4more
    Updated Aug 24, 2024
    + more versions
    Cite
    United States Geological Survey (2024). Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: 30-Year Average Annual Precipitation, 1971-2000 [Dataset]. http://doi.org/10.5066/P9J3E7CY
    Explore at:
    Dataset updated
    Aug 24, 2024
    Dataset authored and provided by
    United States Geological Survey: http://www.usgs.gov/
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    1971 - 2000
    Area covered
    Contiguous United States, United States
    Description

    This data set represents the 30-year (1971-2000) average annual precipitation in millimeters multiplied by 100 compiled for every catchment of NHDPlus for the conterminous United States. The source data were the "United States Average Monthly or Annual Precipitation, 1971 - 2000" raster dataset produced by the PRISM Group at Oregon State University.
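Decoding the stored integers back to millimetres is a single division by 100; the value below is an invented example, not a real catchment record.

```python
# Sketch: recovering millimetres from the stored integer, which is the
# 30-year average annual precipitation multiplied by 100.
def precip_mm(stored_value: int) -> float:
    """Convert the stored (mm * 100) integer back to millimetres."""
    return stored_value / 100.0

example = precip_mm(114235)  # invented catchment value -> 1142.35 mm
```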

    The NHDPlus Version 1.1 is an integrated suite of application-ready geospatial datasets that incorporates many of the best features of the National Hydrography Dataset (NHD) and the National Elevation Dataset (NED). The NHDPlus includes a stream network (based on the 1:100,000-scale NHD), improved networking, naming, and value-added attributes (VAAs). NHDPlus also includes elevation-derived catchments (drainage areas) produced using a drainage enforcement technique first widely used in New England, and thus referred to as "the New England Method." This technique involves "burning in" the 1:100,000-scale NHD and when availab ...

  16. Clinical Data Analytics Market

    • transparencymarketresearch.com
    csv, pdf
    Updated May 28, 2024
    Cite
    Transparency Market Research (2024). Clinical Data Analytics Market [Dataset]. https://www.transparencymarketresearch.com/clinical-data-analytics-market.html
    Explore at:
    Available download formats: pdf, csv
    Dataset updated
    May 28, 2024
    Dataset authored and provided by
    Transparency Market Research
    License

    https://www.transparencymarketresearch.com/privacy-policy.html

    Time period covered
    2024 - 2034
    Area covered
    Worldwide
    Description

    • The global industry was valued at US$ 15.5 Bn in 2023
    • It is expected to grow at a CAGR of 39.7% from 2024 to 2034 and reach US$ 614.7 Bn by the end of 2034

    Market Introduction

    Attribute | Detail
    Clinical Data Analytics Market Drivers
    • Increase in Prevalence of Chronic Diseases
    • Requirement of Advanced Technologies in Healthcare Organizations

    Clinical Data Analytics Market Regional Insights

    Attribute | Detail
    Leading Region | North America

    Global Clinical Data Analytics Market Snapshot

    Attribute | Detail
    Market Size in 2023 | US$ 15.5 Bn
    Market Forecast (Value) in 2034 | US$ 614.7 Bn
    Growth Rate (CAGR) | 39.7%
    Forecast Period | 2024-2034
    Historical Data Available for | 2020-2022
    Quantitative Units | US$ Bn for Value
    Market Analysis | Includes segment-level and regional-level analysis. Qualitative analysis covers drivers, restraints, opportunities, key trends, Porter’s Five Forces analysis, and value chain analysis.
    Competition Landscape
    • Market share analysis by company (2023)
    • Company profiles section includes overview, product portfolio, sales footprint, key subsidiaries or distributors, strategy & recent developments, and key financials
    Format | Electronic (PDF) + Excel
    Market Segmentation
    • Component
      • Services
      • Solutions
    • Type
      • Prescriptive
      • Descriptive
      • Predictive
    • Deployment Type
      • On-premises
      • On-cloud
    • Application
      • Quality Improvement and Clinical Benchmarking
      • Clinical Decision Support
      • Regulatory Reporting and Compliance
      • Comparative Analytics/Comparative Effectiveness
      • Precision Health
    • End-user
      • Healthcare Payers
        • Public
        • Private
      • Healthcare Providers
        • Hospitals and Clinics
        • Ambulatory Surgical Centers
        • Diagnostic Imaging Centers
        • Others (Research Centers, etc.)
    Regions Covered
    • North America
    • Europe
    • Asia Pacific
    • Latin America
    • Middle East & Africa
    Countries Covered
    • U.S.
    • Canada
    • Germany
    • U.K.
    • France
    • Italy
    • Spain
    • China
    • India
    • Japan
    • Australia & New Zealand
    • Brazil
    • Mexico
    • South Africa
    • GCC
    Companies Profiled
    • CareEvolution
    • Veradigm
    • IQVIA
    • Oracle
    • Health Catalyst
    • IBM
    • InterSystems Corporation
    • Optum, Inc.
    • Koninklijke Philips N.V.
    • MedeAnalytics
    • Sisense
    Customization Scope | Available Upon Request
    Pricing | Available Upon Request

  17. Soil and Landscape Grid National Soil Attribute Maps - Sand (3" resolution)...

    • data.csiro.au
    • researchdata.edu.au
    • +1more
    Updated Aug 28, 2024
    + more versions
    Cite
    Brendan Malone; Ross Searle (2024). Soil and Landscape Grid National Soil Attribute Maps - Sand (3" resolution) - Release 2 [Dataset]. http://doi.org/10.25919/rjmy-pa10
    Explore at:
    Dataset updated
    Aug 28, 2024
    Dataset provided by
    CSIRO: http://www.csiro.au/
    Authors
    Brendan Malone; Ross Searle
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1950 - Sep 13, 2021
    Area covered
    Dataset funded by
    CSIRO: http://www.csiro.au/
    TERN
    Department of Agriculture and Food of Western Australia
    NSW Office of Environment and Heritage
    Victorian Department of Environment and Primary Industries
    Northern Territory Department of Land Resource Management
    South Australia Department of Environment, Water and Natural Resources
    Qld Department Science, Information Technology, Innovation and the Arts
    Tasmania Department Primary Industries, Parks, Water and Environment
    The University of Sydney
    Geoscience Australia
    Description

    This is Version 2 of the Australian Soil Sand Content product of the Soil and Landscape Grid of Australia.

    It supersedes the Release 1 product that can be found at https://doi.org/10.4225/08/546F29646877E

    The map gives a modelled estimate of the spatial distribution of sand in soils across Australia.

    The Soil and Landscape Grid of Australia has produced a range of digital soil attribute products. Each product contains six digital soil attribute maps, and their upper and lower confidence limits, representing the soil attribute at six depths: 0-5cm, 5-15cm, 15-30cm, 30-60cm, 60-100cm and 100-200cm. These depths are consistent with the specifications of the GlobalSoilMap.net project (https://esoil.io/TERNLandscapes/Public/Pages/SLGA/Resources/GlobalSoilMap_specifications_december_2015_2.pdf). The digital soil attribute maps are in raster format at a resolution of 3 arc sec (~90 x 90 m pixels).

    Detailed information about the Soil and Landscape Grid of Australia can be found at - https://esoil.io/TERNLandscapes/Public/Pages/SLGA/index.html

    Attribute Definition: 20 um - 2 mm mass fraction of the < 2 mm soil material determined using the pipette method Units: %; Period (temporal coverage; approximately): 1950-2021; Spatial resolution: 3 arc seconds (approx 90m); Total number of gridded maps for this attribute: 18; Number of pixels with coverage per layer: 2007M (49200 * 40800); Data license : Creative Commons Attribution 4.0 (CC BY); Target data standard: GlobalSoilMap specifications; Format: Cloud Optimised GeoTIFF; Lineage: The approach, based on machine learning, predicts each soil texture fraction at 90 m grid cell resolution, at depths 0–5 cm, 5–15 cm, 15–30 cm, 30–60 cm, 60–100 cm and 100–200 cm. The approach accommodates uncertainty in converting field measurements to quantitative estimates of texture fractions. Existing methods of bootstrap resampling were exploited to predict uncertainties, which are expressed as 90% prediction intervals about the mean prediction at each grid cell. The models and the prediction uncertainties were assessed by an external validation dataset. Results were compared with Version 1 Soil and Landscape Grid of Australia (v1.SLGA) (Viscarra Rossel et al. 2015). All predictive and functional accuracy diagnostics demonstrate improvements compared with v1.SLGA. Improvements were noted for the sand and clay fraction mapping with average improvement of 3% and 2%, respectively, in the RMSE estimates. Marginal improvements were made for the silt fraction mapping, which was relatively difficult to predict. We also made comparisons with recently released World Soil Grid products (v2.WSG) and made similar conclusions.
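The bootstrap-resampling idea behind the 90% prediction intervals can be illustrated on toy data. The "model" below is deliberately trivial (the mean of a resampled training set) and the numbers are invented, whereas the real product fits machine-learning models over national covariate grids; only the interval logic is the same.

```python
# Sketch: bootstrap-style 90% prediction interval on toy sand-content data.
import random
import statistics

random.seed(42)
training = [38.0, 41.5, 36.2, 44.1, 39.8, 42.7, 37.3, 40.9]  # toy sand %

# Refit the toy "model" on many bootstrap resamples of the training data.
boot_preds = []
for _ in range(2000):
    sample = random.choices(training, k=len(training))  # resample w/ replacement
    boot_preds.append(statistics.mean(sample))

boot_preds.sort()
lo = boot_preds[int(0.05 * len(boot_preds))]  # 5th percentile
md = boot_preds[int(0.50 * len(boot_preds))]  # 50th percentile (median)
hi = boot_preds[int(0.95 * len(boot_preds))]  # 95th percentile
```

The (lo, hi) pair brackets the median prediction the same way each 90% prediction interval brackets the mean prediction at a grid cell.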

    All processing for the generation of these products was undertaken using the R programming language. R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

    Code - https://github.com/AusSoilsDSM/SLGA Observation data - https://esoil.io/TERNLandscapes/Public/Pages/SoilDataFederator/SoilDataFederator.html Covariate rasters - https://esoil.io/TERNLandscapes/Public/Pages/SLGA/GetData-COGSDataStore.html

  18. Metro Regional Parcel Dataset - (Updated Quarterly)

    • gisdata.mn.gov
    ags_mapserver, fgdb +4
    Updated Jul 24, 2025
    + more versions
    Cite
    MetroGIS (2025). Metro Regional Parcel Dataset - (Updated Quarterly) [Dataset]. https://gisdata.mn.gov/dataset/us-mn-state-metrogis-plan-regional-parcels
    Explore at:
    Available download formats: gpkg, shp, html, fgdb, jpeg, ags_mapserver
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    MetroGIS
    Description

    This dataset includes all 7 metro counties that have made their parcel data freely available without a license or fees.

    This dataset is a compilation of tax parcel polygon and point layers assembled into a common coordinate system from Twin Cities, Minnesota metropolitan area counties. No attempt has been made to edgematch or rubbersheet between counties. A standard set of attribute fields is included for each county. The attributes are the same for the polygon and points layers. Not all attributes are populated for all counties.

    NOTICE: The standard set of attributes changed to the MN Parcel Data Transfer Standard on 1/1/2019.
    https://www.mngeo.state.mn.us/committee/standards/parcel_attrib/parcel_attrib.html

    See section 5 of the metadata for an attribute summary.

    Detailed information about the attributes can be found in the Metro Regional Parcel Attributes document.

    The polygon layer contains one record for each real estate/tax parcel polygon within each county's parcel dataset. Some counties have polygons for each individual condominium, and others do not. (See Completeness in Section 2 of the metadata for more information.) The points layer includes the same attribute fields as the polygon dataset. The points are intended to provide information in situations where multiple tax parcels are represented by a single polygon. One primary example of this is the condominium, though some counties stacked polygons for condos. Condominiums, by definition, are legally owned as individual, taxed real estate units. Records for condominiums may not show up in the polygon dataset. The points for the point dataset often will be randomly placed or stacked within the parcel polygon with which they are associated.

    The polygon layer is broken into individual county shape files. The points layer is provided as both individual county files and as one file for the entire metro area.

    In many places a one-to-one relationship does not exist between these parcel polygons or points and the actual buildings or occupancy units that lie within them. There may be many buildings on one parcel and there may be many occupancy units (e.g. apartments, stores or offices) within each building. Additionally, no information exists within this dataset about residents of parcels. Parcel owner and taxpayer information exists for many, but not all counties.

    This is a MetroGIS Regionally Endorsed dataset.

    Additional information may be available from each county at the links listed below. Also, any questions or comments about suspected errors or omissions in this dataset can be addressed to the contact person at each individual county.

    Anoka = http://www.anokacounty.us/315/GIS
    Carver = http://www.co.carver.mn.us/GIS
    Dakota = http://www.co.dakota.mn.us/homeproperty/propertymaps/pages/default.aspx
    Hennepin = https://gis-hennepin.hub.arcgis.com/pages/open-data
    Ramsey = https://www.ramseycounty.us/your-government/open-government/research-data
    Scott = http://opendata.gis.co.scott.mn.us/
    Washington = http://www.co.washington.mn.us/index.aspx?NID=1606

  19. Large Scale International Boundaries

    • catalog.data.gov
    Updated Jul 22, 2025
    Cite
    U.S. Department of State (Point of Contact) (2025). Large Scale International Boundaries [Dataset]. https://catalog.data.gov/dataset/large-scale-international-boundaries
    Explore at:
    Dataset updated
    Jul 22, 2025
    Dataset provided by
    United States Department of State (http://state.gov/)
    Description

    Overview

    The Office of the Geographer and Global Issues at the U.S. Department of State produces the Large Scale International Boundaries (LSIB) dataset. The current edition is version 11.4 (published 24 February 2025). The 11.4 release contains updated boundary lines and data refinements designed to extend the functionality of the dataset. These data and generalized derivatives are the only international boundary lines approved for U.S. Government use. The contents of this dataset reflect U.S. Government policy on international boundary alignment, political recognition, and dispute status. They do not necessarily reflect de facto limits of control.

    National Geospatial Data Asset

    This dataset is a National Geospatial Data Asset (NGDAID 194) managed by the Department of State. It is a part of the International Boundaries Theme created by the Federal Geographic Data Committee.

    Dataset Source Details

    Sources for these data include treaties, relevant maps, and data from boundary commissions, as well as national mapping agencies. Where available and applicable, the dataset incorporates information from courts, tribunals, and international arbitrations. The research and recovery process includes analysis of satellite imagery and elevation data. Due to the limitations of source materials and processing techniques, most lines are within 100 meters of their true position on the ground.

    Cartographic Visualization

    The LSIB is a geospatial dataset that, when used for cartographic purposes, requires additional styling. The LSIB download package contains example style files for commonly used software applications. The attribute table also contains embedded information to guide the cartographic representation. Additional discussion of these considerations can be found in the Use of Core Attributes in Cartographic Visualization section below.
    Additional cartographic information pertaining to the depiction and description of international boundaries or areas of special sovereignty can be found in Guidance Bulletins published by the Office of the Geographer and Global Issues: https://data.geodata.state.gov/guidance/index.html

    Contact

    Direct inquiries to internationalboundaries@state.gov.

    Direct download: https://data.geodata.state.gov/LSIB.zip

    Attribute Structure

    The dataset uses the following attributes, divided into two categories:

    ATTRIBUTE NAME | ATTRIBUTE STATUS
    CC1 | Core
    CC1_GENC3 | Extension
    CC1_WPID | Extension
    COUNTRY1 | Core
    CC2 | Core
    CC2_GENC3 | Extension
    CC2_WPID | Extension
    COUNTRY2 | Core
    RANK | Core
    LABEL | Core
    STATUS | Core
    NOTES | Core
    LSIB_ID | Extension
    ANTECIDS | Extension
    PREVIDS | Extension
    PARENTID | Extension
    PARENTSEG | Extension

    These attributes have external data sources that update separately from the LSIB:

    ATTRIBUTE NAME | EXTERNAL SOURCE
    CC1 | GENC
    CC1_GENC3 | GENC
    CC1_WPID | World Polygons
    COUNTRY1 | DoS Lists
    CC2 | GENC
    CC2_GENC3 | GENC
    CC2_WPID | World Polygons
    COUNTRY2 | DoS Lists
    LSIB_ID | BASE
    ANTECIDS | BASE
    PREVIDS | BASE
    PARENTID | BASE
    PARENTSEG | BASE

    The core attributes listed above describe the boundary lines contained within the LSIB dataset. Removal of core attributes from the dataset will change the meaning of the lines. An attribute status of "Extension" represents a field containing data interoperability information. Other attributes not listed above include "FID", "Shape_length" and "Shape." These are components of the shapefile format and do not form an intrinsic part of the LSIB.

    Core Attributes

    The eight core attributes listed above contain unique information which, when combined with the line geometry, comprise the LSIB dataset. These Core Attributes are further divided into Country Code and Name Fields and Descriptive Fields.
    Country Code and Country Name Fields

    The "CC1" and "CC2" fields are machine-readable fields that contain political entity codes. These are two-character codes derived from the Geopolitical Entities, Names, and Codes Standard (GENC), Edition 3 Update 18. The "CC1_GENC3" and "CC2_GENC3" fields contain the corresponding three-character GENC codes and are extension attributes discussed below. The codes "Q2" or "QX2" denote a line in the LSIB representing a boundary associated with areas not contained within the GENC standard.

    The "COUNTRY1" and "COUNTRY2" fields contain the names of the corresponding political entities. These fields contain names approved by the U.S. Board on Geographic Names (BGN) as incorporated in the "Independent States in the World" and "Dependencies and Areas of Special Sovereignty" lists maintained by the Department of State. To ensure maximum compatibility, names are presented without diacritics and certain names are rendered using common cartographic abbreviations. Names for lines associated with the code "Q2" are descriptive and not necessarily BGN-approved. Names rendered in all CAPITAL LETTERS denote independent states. Names rendered in normal text represent dependencies, areas of special sovereignty, or are otherwise presented for the convenience of the user.

    Descriptive Fields

    The following text fields are part of the core attributes of the LSIB dataset and do not update from external sources. They provide additional information about each of the lines:

    ATTRIBUTE NAME | CONTAINS NULLS
    RANK | No
    STATUS | No
    LABEL | Yes
    NOTES | Yes

    Neither the "RANK" nor "STATUS" fields contain null values; the "LABEL" and "NOTES" fields do. The "RANK" field is a numeric expression of the "STATUS" field. Combined with the line geometry, these fields encode the views of the United States Government on the political status of the boundary line.
    RANK | STATUS
    1 | International Boundary
    2 | Other Line of International Separation
    3 | Special Line

    A value of "1" in the "RANK" field corresponds to an "International Boundary" value in the "STATUS" field. Values of "2" and "3" correspond to "Other Line of International Separation" and "Special Line," respectively.

    The "LABEL" field contains required text to describe the line segment on all finished cartographic products, including but not limited to print and interactive maps. The "NOTES" field contains an explanation of special circumstances modifying the lines. This information can pertain to the origins of the boundary lines, limitations regarding the purpose of the lines, or the original source of the line.

    Use of Core Attributes in Cartographic Visualization

    Several of the Core Attributes provide information required for the proper cartographic representation of the LSIB dataset. The cartographic usage of the LSIB requires a visual differentiation between the three categories of boundary lines: International Boundaries (Rank 1); Other Lines of International Separation (Rank 2); and Special Lines (Rank 3). Rank 1 lines must be the most visually prominent. Rank 2 lines must be less visually prominent than Rank 1 lines. Rank 3 lines must be shown in a manner visually subordinate to Ranks 1 and 2. Where scale permits, Rank 2 and 3 lines must be labeled in accordance with the "LABEL" field. Data marked with a Rank 2 or 3 designation does not necessarily correspond to a disputed boundary. Please consult the style files in the download package for examples of this depiction.

    The requirement to incorporate the contents of the "LABEL" field on cartographic products is scale dependent. If a label is legible at the scale of a given static product, a proper use of this dataset would encourage the application of that label.
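The RANK-to-STATUS correspondence and the relative-prominence rule can be sketched in code. This is a minimal illustration only, not part of the LSIB distribution; the line widths are invented to show relative prominence and are not values prescribed by the dataset documentation.

```python
# Sketch of the RANK/STATUS mapping described above. Widths are
# illustrative choices (Rank 1 most prominent), not prescribed values.
RANK_STATUS = {
    1: "International Boundary",
    2: "Other Line of International Separation",
    3: "Special Line",
}
RANK_WIDTH = {1: 1.2, 2: 0.8, 3: 0.5}

def line_style(rank: int) -> dict:
    """Return the STATUS text and a relative line width for a RANK value."""
    return {"status": RANK_STATUS[rank], "width": RANK_WIDTH[rank]}

print(line_style(1))  # the most visually prominent category
```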
    Using the contents of the "COUNTRY1" and "COUNTRY2" fields in the generation of a line segment label is not required. The "STATUS" field contains the preferred description for the three LSIB line types when they are incorporated into a map legend, but is otherwise not to be used for labeling. Use of the "CC1," "CC1_GENC3," "CC2," "CC2_GENC3," "RANK," or "NOTES" fields for cartographic labeling purposes is prohibited.

    Extension Attributes

    Certain elements of the attributes within the LSIB dataset extend data functionality to make the data more interoperable or to provide clearer linkages to other datasets. The fields "CC1_GENC3" and "CC2_GENC3" contain the three-character GENC codes corresponding to the "CC1" and "CC2" attributes. The code "QX2" is the three-character counterpart of the code "Q2," which denotes a line in the LSIB representing a boundary associated with a geographic area not contained within the GENC standard.

    To allow for linkage between individual lines in the LSIB and the World Polygons dataset, the "CC1_WPID" and "CC2_WPID" fields contain a version 4 Universally Unique Identifier (UUID), which provides a stable identifier for each geographic entity in a boundary pair relationship. Each UUID corresponds to a geographic entity listed in the World Polygons dataset.

    Five additional fields in the LSIB expand on the UUID concept and either describe features that have changed across space and time or indicate relationships between previous versions of a feature. The "LSIB_ID" attribute is a UUID value that defines a specific instance of a feature. Any change to the feature in a lineset requires a new "LSIB_ID." The "ANTECIDS," or antecedent ID, is a UUID that references line geometries from which a given line is descended in time. It is used when a feature is entirely new, not when there is a new version of a previous feature.
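The WPID linkage amounts to a key join between a boundary line and the two entities it separates. The sketch below illustrates that join; the UUIDs and entity names are fabricated for illustration and do not appear in the actual LSIB or World Polygons data.

```python
# Sketch: resolving an LSIB line's boundary pair through its
# CC1_WPID/CC2_WPID UUIDs. All identifiers and names are made up.
world_polygons = {
    "6fa459ea-ee8a-4ca4-894e-db77e160355e": {"name": "Country A"},
    "16fd2706-8baf-433b-82eb-8c7fada847da": {"name": "Country B"},
}

line = {
    "RANK": 1,
    "CC1_WPID": "6fa459ea-ee8a-4ca4-894e-db77e160355e",
    "CC2_WPID": "16fd2706-8baf-433b-82eb-8c7fada847da",
}

# Look up both sides of the boundary pair in the World Polygons table.
pair = tuple(world_polygons[line[k]]["name"] for k in ("CC1_WPID", "CC2_WPID"))
print(pair)
```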
    This is generally used to reference countries that have dissolved. The "PREVIDS," or Previous ID, is a UUID field that contains old versions of a line. This is an additive field that houses all previous IDs. A new

  20. Soil Survey Geographic (SSURGO) database for De Baca County Area, New Mexico...

    • gstore.unm.edu
    • gimi9.com
    Cite
    U.S. Department of Agriculture, Natural Resources Conservation Service, Soil Survey Geographic (SSURGO) database for De Baca County Area, New Mexico [Dataset]. https://gstore.unm.edu/apps/rgisarchive/datasets/708f4837-9851-4826-a4c7-ab5de3706815/metadata/ISO-19115:2003.html
    Explore at:
    Dataset provided by
    United States Department of Agriculture (http://usda.gov/)
    Natural Resources Conservation Service (http://www.nrcs.usda.gov/)
    Time period covered
    Nov 20, 2003
    Area covered
    De Baca County, West Bound -104.893 East Bound -103.946 North Bound 34.779 South Bound 33.996
    Description

    This data set is a digital soil survey and generally is the most detailed level of soil geographic data developed by the National Cooperative Soil Survey. The information was prepared by digitizing maps, by compiling information onto a planimetrically correct base and digitizing, or by revising digitized maps using remotely sensed and other information. This data set consists of georeferenced digital map data and computerized attribute data. The map data are in a soil survey area extent format and include a detailed, field-verified inventory of soils and miscellaneous areas that normally occur in a repeatable pattern on the landscape and that can be cartographically shown at the scale mapped. An optional special soil features layer (point and line features) displays the location of features too small to delineate at the mapping scale but large enough and contrasting enough to significantly influence use and management. The soil map units are linked to attributes in the National Soil Information System relational database, which gives the proportionate extent of the component soils and their properties.
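The link between map units and their component soils can be pictured as a simple keyed join. The sketch below assumes the SSURGO convention of a map unit key ("mukey") joining the spatial layer to a tabular component table with representative percentages ("comppct_r"); the soil names and values are invented for illustration.

```python
# Sketch: joining a SSURGO map unit to its component records by map
# unit key. Field names follow SSURGO conventions ("mukey",
# "comppct_r"); the sample values are fabricated.
mapunits = [{"mukey": "123", "musym": "AbC"}]

components = [
    {"mukey": "123", "compname": "Alpha loam", "comppct_r": 60},
    {"mukey": "123", "compname": "Beta sand", "comppct_r": 40},
]

def components_for(mukey: str) -> list:
    """Return the component records for one map unit."""
    return [c for c in components if c["mukey"] == mukey]

parts = components_for(mapunits[0]["mukey"])
print([(c["compname"], c["comppct_r"]) for c in parts])
```

The representative percentages of a map unit's components describe its proportionate extent, which is why they typically sum to about 100.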
