15 datasets found
  1. Filter (Mature)

    • data-salemva.opendata.arcgis.com
    Updated Jul 3, 2014
    Cite
    esri_en (2014). Filter (Mature) [Dataset]. https://data-salemva.opendata.arcgis.com/items/1bdcdf930b4345dfb4db10f795e0c726
    Dataset updated
    Jul 3, 2014
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    esri_en
    Description

    Filter is a configurable app template that displays a map with an interactive filtered view of one or more feature layers. The application displays prompts and hints for attribute filter values which are used to locate specific features.

    Use Cases
    Filter displays an interactive dialog box for exploring the distribution of a single attribute or the relationship between different attributes. This is a good choice when you want to understand the distribution of different types of features within a layer, or create an experience where you can gain deeper insight into how the interaction of different variables affects the resulting map content.

    Configurable Options
    Filter can present a web map and be configured with the following options:
    Choose the web map used in the application.
    Provide a title and color theme. The default title is the web map name.
    Configure the ability for feature and location search.
    Define the filter experience and provide text to encourage user exploration of data by displaying additional values to choose as the filter text.

    Supported Devices
    This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

    Data Requirements
    Requires at least one layer with an interactive filter. See the Apply Filters help topic for more details.

    Get Started
    This application can be created in the following ways:
    Click the Create a Web App button on this page.
    Share a map and choose to Create a Web App.
    On the Content page, click Create - App - From Template.
    Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.

  2. ckanext-viewhelpers

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-viewhelpers [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-viewhelpers
    Dataset updated
    Jun 4, 2025
    Description

    The ckanext-viewhelpers extension provides a collection of helper functions designed to simplify the creation of custom resource views in CKAN. Its primary focus is on enabling filtering functionality for visualizations, allowing users to interactively explore datasets by specifying criteria directly in the URL. This extension aims to facilitate the development of more dynamic and user-friendly data visualizations within the CKAN ecosystem.

    Key Features:
    Filtering Scheme: Implements a filtering syntax using URL query parameters to subset data displayed in visualizations. Filters are specified as Key:Value pairs, separated by pipes (|). Multiple values for the same key are treated as a logical OR, while different keys are treated as a logical AND.
    filters_form Module: Provides a CKAN module (currently under development) to assist developers in creating forms that allow users to define filter criteria for visualizations. This aims to standardize and streamline the process of building interactive filter interfaces.
    Example Implementations: The extension's repository highlights other CKAN extensions (ckanext-dashboard, ckanext-basiccharts, and ckanext-mapviews) as examples of how its filtering capabilities can be integrated into real-world applications. These extensions serve as reference points for developers seeking to implement similar functionality.

    Technical Integration: The ckanext-viewhelpers extension integrates with CKAN by providing helper functions that can be used within other extensions, particularly those that create resource views. Once installed and enabled in the CKAN configuration file, the helper functions become available to other plugins. The filtering scheme relies on parsing URL query parameters and applying the specified filters to the data used by the visualization.

    Benefits & Impact: By providing a standardized filtering scheme and helper functions, ckanext-viewhelpers aims to simplify the process of creating interactive data visualizations within CKAN. This can lead to:
    Improved User Experience: Users can more easily explore and analyze data by filtering it according to their specific needs and interests.
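    To make the filtering scheme concrete, the following minimal Python sketch parses a pipe-separated Key:Value filter string and applies it to a list of records. The query parameter name and the record layout are illustrative assumptions, not part of the extension's documented API.

    from collections import defaultdict

    def parse_filters(filter_string):
        # "country:SE|country:NO|year:2024" -> {"country": {"SE", "NO"}, "year": {"2024"}}
        filters = defaultdict(set)
        for pair in filter_string.split("|"):
            if pair:
                key, _, value = pair.partition(":")
                filters[key].add(value)
        return filters

    def apply_filters(records, filters):
        # Values for the same key are ORed together; different keys are ANDed.
        return [
            record for record in records
            if all(str(record.get(key)) in values for key, values in filters.items())
        ]

    records = [
        {"country": "SE", "year": 2024},
        {"country": "DE", "year": 2024},
        {"country": "NO", "year": 2023},
    ]
    print(apply_filters(records, parse_filters("country:SE|country:NO|year:2024")))
    # -> [{'country': 'SE', 'year': 2024}]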

  3. Conceptualization of public data ecosystems

    • data.niaid.nih.gov
    Updated Sep 26, 2024
    + more versions
    Cite
    Martin, Lnenicka (2024). Conceptualization of public data ecosystems [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_13842001
    Dataset updated
    Sep 26, 2024
    Dataset provided by
    Anastasija, Nikiforova
    Martin, Lnenicka
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems" conducted by Martin Lnenicka (University of Hradec Králové, Czech Republic), Anastasija Nikiforova (University of Tartu, Estonia), Mariusz Luterek (University of Warsaw, Warsaw, Poland), Petar Milic (University of Pristina - Kosovska Mitrovica, Serbia), Daniel Rudmark (Swedish National Road and Transport Research Institute, Sweden), Sebastian Neumaier (St. Pölten University of Applied Sciences, Austria), Karlo Kević (University of Zagreb, Croatia), Anneke Zuiderwijk (Delft University of Technology, Delft, the Netherlands), Manuel Pedro Rodríguez Bolívar (University of Granada, Granada, Spain).

    As there is a lack of understanding of the elements that constitute different types of value-adding public data ecosystems and how these elements form and shape the development of these ecosystems over time, which can lead to misguided efforts to develop future public data ecosystems, the aim of the study is: (1) to explore how public data ecosystems have developed over time and (2) to identify the value-adding elements and formative characteristics of public data ecosystems. Using an exploratory retrospective analysis and a deductive approach, we systematically review 148 studies published between 1994 and 2023. Based on the results, this study presents a typology of public data ecosystems and develops a conceptual model of elements and formative characteristics that contribute most to value-adding public data ecosystems, and develops a conceptual model of the evolutionary generation of public data ecosystems represented by six generations called Evolutionary Model of Public Data Ecosystems (EMPDE). Finally, three avenues for a future research agenda are proposed.

    This dataset is being made public both to act as supplementary data for "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems" (Telematics and Informatics) and to accompany the Systematic Literature Review component that informs the study.

    Description of the data in this data set

    PublicDataEcosystem_SLR provides the structure of the protocol.

    Spreadsheet #1 provides the list of results after the search over three indexing databases and filtering out irrelevant studies.

    Spreadsheet #2 provides the protocol structure.

    Spreadsheet #3 provides the filled protocol for relevant studies.

    The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design-related information, (3) quality-related information, (4) HVD determination-related information.

    Descriptive Information

    Article number

    A study number, corresponding to the study number assigned in an Excel worksheet

    Complete reference

    The complete source information to refer to the study (in APA style), including the author(s) of the study, the year in which it was published, the study's title and other source information.

    Year of publication

    The year in which the study was published.

    Journal article / conference paper / book chapter

    The type of the paper, i.e., journal article, conference paper, or book chapter.

    Journal / conference / book

    The journal, conference, or book where the paper is published.

    DOI / Website

    A link to the website where the study can be found.

    Number of words

    The number of words in the study.

    Number of citations in Scopus and WoS

    The number of citations of the paper in Scopus and WoS digital libraries.

    Availability in Open Access

    Availability of the study as Open Access or Free / Full Access.

    Keywords

    Keywords of the paper as indicated by the authors (in the paper).

    Relevance for our study (high / medium / low)

    What is the relevance level of the paper for our study?

    Approach- and research design-related information

    Objective / Aim / Goal / Purpose & Research Questions

    The research objective and established RQs.

    Research method (including unit of analysis)

    The methods used to collect data in the study, including the unit of analysis, i.e., the country, organisation, or other specific unit that has been analysed (such as the number of use cases or policy documents, or the number and scope of the SLR, etc.).

    Study’s contributions

    The study’s contribution as defined by the authors

    Qualitative / quantitative / mixed method

    Whether the study uses a qualitative, quantitative, or mixed-methods approach.

    Availability of the underlying research data

    Whether the paper includes a reference to the public availability of the underlying research data (e.g., transcriptions of interviews, collected data, etc.) or explains why these data are not openly shared.

    Period under investigation

    Period (or moment) in which the study was conducted (e.g., January 2021-March 2022)

    Use of theory / theoretical concepts / approaches? If yes, specify them

    Does the study mention any theory / theoretical concepts / approaches? If yes, what theory / concepts / approaches? If any theory is mentioned, how is theory used in the study? (e.g., mentioned to explain a certain phenomenon, used as a framework for analysis, tested theory, theory mentioned in the future research section).

    Quality-related information

    Quality concerns

    Whether there are any quality concerns (e.g., limited information about the research methods used).

    Public Data Ecosystem-related information

    Public data ecosystem definition

    How is the public data ecosystem defined in the paper, including any equivalent term used (most often "infrastructure")? If an alternative term is used, what is the public data ecosystem called in the paper?

    Public data ecosystem evolution / development

    Does the paper define the evolution of the public data ecosystem? If yes, how is it defined and what factors affect it?

    What constitutes a public data ecosystem?

    What constitutes a public data ecosystem (components and relationships), i.e., their "FORM / OUTPUT" as presented in the paper (general description, with more detailed answers under the following questions).

    Components and relationships

    What components does the public data ecosystem consist of and what are the relationships between these components? Alternative names for components - element, construct, concept, item, helix, dimension etc. (detailed description).

    Stakeholders

    What stakeholders (e.g., governments, citizens, businesses, Non-Governmental Organisations (NGOs) etc.) does the public data ecosystem involve?

    Actors and their roles

    What actors does the public data ecosystem involve? What are their roles?

    Data (data types, data dynamism, data categories etc.)

    What data does the public data ecosystem cover (i.e., what is it intended / designed for)? Refer to all data-related aspects, including but not limited to data types, data dynamism (static data, dynamic, real-time data, stream), prevailing data categories / domains / topics, etc.

    Processes / activities / dimensions, data lifecycle phases

    What processes, activities, dimensions and data lifecycle phases (e.g., locate, acquire, download, reuse, transform, etc.) does the public data ecosystem involve or refer to?

    Level (if relevant)

    What is the level of the public data ecosystem covered in the paper? (e.g., city, municipal, regional, national (=country), supranational, international).

    Other elements or relationships (if any)

    What other elements or relationships does the public data ecosystem consist of?

    Additional comments

    Additional comments (e.g., what other topics affected the public data ecosystems and their elements, what is expected to affect the public data ecosystems in the future, what were important topics by which the period was characterised etc.).

    New papers

    Does the study refer to any other potentially relevant papers?

    Additional references to potentially relevant papers that were found in the analysed paper (snowballing).

    Format of the file: .xls, .csv (for the first spreadsheet only), .docx

    Licenses or restrictions: CC-BY

    For more info, see README.txt

  4. ckanext-metaconf

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-metaconf [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-metaconf
    Dataset updated
    Jun 4, 2025
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The metaconf extension modifies the create, show, update, and export functionalities of CKAN datasets to incorporate a custom metadata schema. It allows the portal maintainer to define metadata fields as Python McBlock objects, which specify how the data should be collected from the user. The configurations made in settings.py directly influence the forms and data structures used within CKAN.

    Benefits & Impact:
    Improved Data Quality: Enforces a consistent metadata structure, leading to improved data quality and discoverability.
    Tailored Data Entry: Allows data entry forms to be customized to specific data domains, simplifying the process for users.
    Enhanced Data Discoverability: By defining specific metadata fields, the extension supports more effective data searching and filtering within the CKAN portal.

  5. ckanext-datavic-reporting

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-datavic-reporting [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-datavic-reporting
    Dataset updated
    Jun 4, 2025
    Description

    The ckanext-datavic-reporting extension is a custom reporting tool designed for Data.Vic, a CKAN-based data portal. It enables administrators to schedule and generate reports on organization statistics at preconfigured frequencies (e.g., monthly, yearly). This is achieved through a combination of report schedule entities that define the report parameters and automated report jobs that execute according to the defined schedules, providing insights into data usage and organizational performance.

    Key Features:
    Scheduled Report Generation: Allows automated generation of reports based on configurable schedules, eliminating the need for manual report creation.
    Configurable Reporting Frequencies: Supports multiple reporting frequencies (e.g., monthly, yearly), configurable via the CKAN admin interface, providing flexibility in reporting cadence.
    Organization-Specific Reporting: Generates reports specifically for individual organizations (identified by org_id), enabling targeted analysis of organizational data.
    Sub-Organization Reporting: Enables the inclusion of specific groups (sub-organizations) in the reports via a comma-separated list of sub_org_ids, providing a more granular view of the data landscape.
    Role-Based Reporting: Allows filtering of report data based on user roles, enabling analysis of data access and usage by different user groups.
    Report Output to File System: Saves generated reports as CSV files in a configurable directory structure, allowing for easy access and integration with other analytics tools. The file path includes the organization ID, year, month and report timestamp.
    API Endpoints for Management: Provides a set of API endpoints for managing report schedules, including creation, updating, deletion, and listing of schedules and jobs. These API functions provide the following functionality:
    datavic_reporting_schedule_create: creates a scheduled report
    datavic_reporting_schedule_update: updates an existing schedule record
    datavic_reporting_schedule_delete: deletes a schedule record
    datavic_reporting_schedule_list: lists the report schedules
    datavic_reporting_job_list: lists the report jobs

    Technical Integration: The extension integrates with CKAN as a plugin, adding new configuration settings, database migrations, and API endpoints. It leverages CKAN's plugin architecture to extend its functionality without modifying core CKAN code. DB migrations are required during installation to create the tables needed to store report schedule information. The configuration settings include the location of report storage and the available frequencies.

    Benefits & Impact: By automating report generation, ckanext-datavic-reporting reduces the manual effort required to monitor organization statistics within Data.Vic. This helps administrators gain insights into data usage, identify trends, and make informed decisions to improve data governance and accessibility. The configurable frequencies and targeted reporting options allow analysis to be tailored to specific organizational needs on a flexible reporting schedule.
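    Assuming these actions are exposed through CKAN's standard action API, a schedule could be managed roughly as sketched below; the portal URL and the payload field names (frequency, org_id, sub_org_ids, user_roles) are illustrative assumptions rather than documented parameters.

    import requests

    CKAN_URL = "https://data.vic.example"   # hypothetical portal URL
    API_TOKEN = "your-api-token"            # token of an authorized admin user

    def ckan_action(action, payload):
        # Standard CKAN action-API call: POST /api/3/action/<action>
        response = requests.post(
            f"{CKAN_URL}/api/3/action/{action}",
            json=payload,
            headers={"Authorization": API_TOKEN},
        )
        response.raise_for_status()
        return response.json()["result"]

    # Create a monthly schedule for one organisation and two sub-organisations.
    schedule = ckan_action("datavic_reporting_schedule_create", {
        "frequency": "monthly",
        "org_id": "department-of-environment",
        "sub_org_ids": "sub-org-one,sub-org-two",
        "user_roles": "editor,admin",
    })

    # List existing schedules and the jobs they have produced.
    print(ckan_action("datavic_reporting_schedule_list", {}))
    print(ckan_action("datavic_reporting_job_list", {"schedule_id": schedule.get("id")}))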

  6. A stakeholder-centered determination of High-Value Data sets: the use-case of Latvia

    • data.niaid.nih.gov
    Updated Oct 27, 2021
    Cite
    Anastasija Nikiforova (2021). A stakeholder-centered determination of High-Value Data sets: the use-case of Latvia [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5142816
    Dataset updated
    Oct 27, 2021
    Dataset authored and provided by
    Anastasija Nikiforova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Latvia
    Description

    The data in this dataset were collected as a result of a survey of Latvian society (2021) aimed at identifying high-value data sets for Latvia, i.e. data sets that, in the view of Latvian society, could create value for the Latvian economy and society. The survey was created for both individuals and businesses. It is being made public both to act as supplementary data for the paper "Towards enrichment of the open government data: a stakeholder-centered determination of High-Value Data sets for Latvia" (author: Anastasija Nikiforova, University of Latvia) and so that other researchers can use these data in their own work.

    The survey was distributed among Latvian citizens and organisations. The structure of the survey is provided in the supplementary file (see Survey_HighValueDataSets.odt).

    Description of the data in this data set: structure of the survey and pre-defined answers (if any)
    1. Have you ever used open (government) data? - {(1) yes, once; (2) yes, there has been a little experience; (3) yes, continuously; (4) no, it wasn't needed for me; (5) no, have tried but have failed}
    2. How would you assess the value of open government data that are currently available for your personal use or your business? - 5-point Likert scale, where 1 – any to 5 – very high
    3. If you ever used the open (government) data, what was the purpose of using them? - {(1) have not had to use; (2) to identify the situation for an object or an event (e.g. Covid-19 current state); (3) data-driven decision-making; (4) for the enrichment of my data, i.e. by supplementing them; (5) for better understanding of decisions of the government; (6) awareness of governments' actions (increasing transparency); (7) forecasting (e.g. trends etc.); (8) for developing data-driven solutions that use only the open data; (9) for developing data-driven solutions, using open data as a supplement to existing data; (10) for training and education purposes; (11) for entertainment; (12) other (open-ended question)}
    4. What category(ies) of "high value datasets" is, in your opinion, able to create added value for society or the economy? - {(1) Geospatial data; (2) Earth observation and environment; (3) Meteorological; (4) Statistics; (5) Companies and company ownership; (6) Mobility}
    5. To what extent do you think the current data catalogue of Latvia's Open data portal corresponds to the needs of data users / consumers? - 10-point Likert scale, where 1 – no data are useful, but 10 – fully correspond, i.e. all potentially valuable datasets are available
    6. Which of the current data categories in Latvia's open data portals, in your opinion, most corresponds to the "high value dataset"? - {(1) Foreign affairs; (2) business economy; (3) energy; (4) citizens and society; (5) education and sport; (6) culture; (7) regions and municipalities; (8) justice, internal affairs and security; (9) transport; (10) public administration; (11) health; (12) environment; (13) agriculture, food and forestry; (14) science and technologies}
    7. Which of them form your TOP-3? - {same categories as in question 6}
    8. How would you assess the value of the following data categories? 8.1. sensor data; 8.2. real-time data; 8.3. geospatial data - each on a 5-point Likert scale, where 1 – not needed to 5 – highly valuable
    9. What would be these datasets? I.e. what (sub)topic could these data be associated with? - open-ended question
    10. Which of the data sets currently available could be valuable and useful for society and businesses? - open-ended question
    11. Which of the data sets currently NOT available in Latvia's open data portal could, in your opinion, be valuable and useful for society and businesses? - open-ended question
    12. How did you define them? - {(1) subjective opinion; (2) experience with data; (3) filtering out the most popular datasets, i.e. basing on public opinion; (4) other (open-ended question)}
    13. How high could the value of these data sets be for you or your business? - 5-point Likert scale, where 1 – not valuable, 5 – highly valuable
    14. Do you represent any company / organization (are you working anywhere)? (if "yes", please fill out the survey twice, i.e. as an individual user AND a company representative) - {yes; no; I am an individual data user; other (open-ended)}
    15. What industry / sector does your company / organization belong to? (if you do not work at the moment, please choose the last option) - {Information and communication services; Financial and insurance activities; Accommodation and catering services; Education; Real estate operations; Wholesale and retail trade, repair of motor vehicles and motorcycles; transport and storage; construction; water supply, waste water, waste management and recovery; electricity, gas supply, heating and air conditioning; manufacturing industry; mining and quarrying; agriculture, forestry and fisheries; professional, scientific and technical services; operation of administrative and service services; public administration and defence; compulsory social insurance; health and social care; art, entertainment and recreation; activities of households as employers; CSO/NGO; I am not a representative of any company}
    16. To which category does your company / organization belong in terms of its size? - {small; medium; large; self-employed; I am not a representative of any company}
    17. What is the age group that you belong to? (if you are an individual user, not a company representative) - {11..15, 16..20, 21..25, 26..30, 31..35, 36..40, 41..45, 46+, "do not want to reveal"}
    18. Please indicate the education or scientific degree that corresponds most to you. (if you are an individual user, not a company representative) - {master's degree; bachelor's degree; Dr. and/or PhD; student (bachelor level); student (master level); doctoral candidate; pupil; do not want to reveal these data}

    Format of the file: .xls, .csv (for the first spreadsheet only), .odt

    Licenses or restrictions: CC-BY

  7. ckanext-chhs_schema

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-chhs_schema [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-chhs_schema
    Dataset updated
    Jun 4, 2025
    Description

    The CHHS Schema extension for CKAN provides a way to implement a custom dataset schema tailored for the California Health and Human Services (CHHS) agency. This extension leverages the ckanext-scheming library to define the specific fields, validation rules, and user interface elements required for CHHS datasets. By using this extension, organizations can ensure consistency and quality within their CHHS data catalog, improving data discoverability and usability.

    Key Features:
    Custom Dataset Schema Definition: Defines a specific dataset schema for CHHS datasets using ckanext-scheming. This includes specifying fields such as data source, maintainer details, geographic coverage, and other relevant metadata elements.
    Schema Enforcement: Enforces the defined schema during dataset creation and editing, ensuring that all required fields are populated and adhere to the specified validation rules. This helps maintain data quality and integrity.
    ckanext-scheming Integration: Seamlessly integrates with the ckanext-scheming extension to leverage its capabilities for defining and managing dataset schemas. This simplifies the process of creating and customizing the CHHS dataset schema.
    Enhanced Data Discoverability: Increases data discoverability and usability through consistent metadata application across the data catalog. Search and filtering become simpler and more accurate.

    Technical Integration: The CHHS Schema extension integrates with CKAN by implementing a plugin that relies on ckanext-scheming. To enable the extension, it must be added to CKAN's ckan.plugins configuration setting alongside scheming_datasets. This activates the custom CHHS dataset schema within the CKAN instance.

    Benefits & Impact: Implementing the CHHS Schema extension ensures that datasets within a CKAN instance adhere to a standardized schema, greatly improving data consistency and reliability. The main benefits are improved data governance and data search & discovery, and ensuring that any external or internal organization using the datasets understands immediately what the data represents.

  8. Cloud amount/frequency, NITRATE and other data from AIRCRAFT, USS DE STEIGUER (AGOR 12) and other platforms in the North Pacific Ocean from 1985-10-31 to 1986-08-05 (NCEI Accession 8700017)

    • catalog.data.gov
    Updated Jun 1, 2025
    + more versions
    Cite
    (Point of Contact) (2025). Cloud amount/frequency, NITRATE and other data from AIRCRAFT, USS DE STEIGUER (AGOR 12) and other platforms in the North Pacific Ocean from 1985-10-31 to 1986-08-05 (NCEI Accession 8700017) [Dataset]. https://catalog.data.gov/dataset/cloud-amount-frequency-nitrate-and-other-data-from-aircraft-uss-de-steiguer-agor-12-and-other-p
    Dataset updated
    Jun 1, 2025
    Dataset provided by
    (Point of Contact)
    Area covered
    Pacific Ocean
    Description

    Data has been processed by NODC to the NODC standard Bathythermograph (XBT Aircraft) (C118), Bathythermograph (XBT) (C116), and High-Resolution CTD/STD (F022) formats. The C116/C118 format contains temperature-depth profile data obtained using expendable bathythermograph (XBT) instruments. Cruise information, position, date and time were reported for each observation. The data record was comprised of pairs of temperature-depth values. Unlike the MBT Data File, in which temperature values were recorded at uniform 5 m intervals, the XBT data files contained temperature values at non-uniform depths. These depths were recorded at the minimum number of points ("inflection points") required to accurately define the temperature curve. Standard XBTs can obtain profiles to depths of either 450 or 760 m. With special instruments, measurements can be obtained to 1830 m. Prior to July 1994, XBT data were routinely processed to one of these standard types. XBT data are now processed and loaded directly into the NODC Ocean Profile Data Base (OPDB). Historic data from these two data types were loaded into the OPDB. The F022 format contains high-resolution data collected using CTD (conductivity-temperature-depth) and STD (salinity-temperature-depth) instruments. As they are lowered and raised in the oceans, these electronic devices provide nearly continuous profiles of temperature, salinity, and other parameters. Data values may be subject to averaging or filtering or obtained by interpolation and may be reported at depth intervals as fine as 1 m. Cruise and instrument information, position, date, time and sampling interval are reported for each station. Environmental data at the time of the cast (meteorological and sea surface conditions) may also be reported. The data record comprises values of temperature, salinity or conductivity, density (computed sigma-t), and possibly dissolved oxygen or transmissivity at specified depth or pressure levels. Data may be reported at either equally or unequally spaced depth or pressure intervals. A text record is available for comments.
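    To illustrate the inflection-point storage described above, the following Python sketch rebuilds a profile at uniform 5 m depth intervals from a handful of made-up (depth, temperature) pairs; it is only an illustration of the idea, not a reader for the binary C116/C118 records themselves.

    import numpy as np

    # Hypothetical XBT profile stored as (depth_m, temp_C) "inflection points":
    # the minimum set of points needed to reproduce the temperature curve.
    inflection_points = [(0, 18.2), (45, 18.0), (80, 12.5), (200, 8.1), (450, 4.3)]

    depths = np.array([p[0] for p in inflection_points], dtype=float)
    temps = np.array([p[1] for p in inflection_points], dtype=float)

    # Rebuild a uniformly sampled profile by linear interpolation
    # between the recorded inflection points.
    uniform_depths = np.arange(0.0, depths.max() + 1.0, 5.0)
    uniform_temps = np.interp(uniform_depths, depths, temps)

    for d, t in zip(uniform_depths[:5], uniform_temps[:5]):
        print(f"{d:6.1f} m  {t:5.2f} degC")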

  9. National Forest and Sparse Woody Vegetation Data (Version 7.0 - 2022 Release)

    • researchdata.edu.au
    Updated Jun 5, 2023
    + more versions
    Cite
    Australian Government Department of Climate Change, Energy, the Environment and Water (2023). National Forest and Sparse Woody Vegetation Data (Version 7.0 - 2022 Release) [Dataset]. https://researchdata.edu.au/national-forest-sparse-2022-release/2996089
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    Data.gov (https://data.gov/)
    Authors
    Australian Government Department of Climate Change, Energy, the Environment and Water
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Landsat satellite imagery is used to derive woody vegetation extent products that discriminate between forest, sparse woody and non-woody land cover across a time series from 1988 to 2022. A forest is defined as woody vegetation with a minimum 20 per cent canopy cover, at least 2 metres high and a minimum area of 0.2 hectares. Note that this product is not filtered to the 0.2 ha criterion for forest to allow for flexibility in different use cases. Filtering to remove areas less than 0.2 ha is undertaken in downstream processing for the purposes of Australia's National Inventory Reports. Sparse woody is defined as woody vegetation with a canopy cover between 5-19 per cent.

    The three-class classification (forest, sparse woody and non-woody) supersedes the two-class classification (forest and non-forest) from 2016. The new classification is produced using the same approach in terms of time series processing (conditional probability networks) as the two-class method, to detect woody vegetation cover. The three-class algorithm better encompasses the different types of woody vegetation across the Australian landscape.

    Unlike previous versions of the National Forest and Sparse Woody Vegetation data releases, where 35 tiles were released concurrently as part of the product, only the 25 southern tiles were supplied in the initial v7.0 release in June 2023. The 10 northern tiles were released in July 2024 as v7.1 as a supplement to the initial product release to complete the standard 35 tiles. Please see the National Forest and Sparse Woody Vegetation data metadata PDF (Version 7.1 - 2022 release) for more information.

  10. QASPER: NLP Questions and Evidence

    • opendatabay.com
    Updated Jun 22, 2025
    Cite
    Datasimple (2025). QASPER: NLP Questions and Evidence [Dataset]. https://www.opendatabay.com/data/ai-ml/c030902d-7b02-48a2-b32f-8f7140dd1de7
    Dataset updated
    Jun 22, 2025
    Dataset authored and provided by
    Datasimple
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Data Science and Analytics
    Description

    QASPER: NLP Questions and Evidence Discovering Answers with Expertise By Huggingface Hub [source]

    About this dataset
    QASPER is an incredible collection of over 5,000 questions and answers on a vast range of Natural Language Processing (NLP) papers, all crowdsourced from experienced NLP practitioners. Each question in the dataset is written based only on the titles and abstracts of the corresponding paper, providing an insight into how the experts understood and parsed various materials. The answers to each query have been expertly enriched by evidence taken directly from the full text of each paper. Moreover, QASPER comes with carefully crafted fields that contain relevant information, including 'qas' (questions and answers), 'evidence' (evidence provided for answering questions), title, abstract, figures_and_tables, and full_text. All this adds up to create a remarkable dataset for researchers looking to gain insights into how practitioners interpret NLP topics, while providing effective validation when it comes to finding clear-cut solutions to problems encountered in existing literature.


    How to use the dataset
    This guide provides instructions on how to use the QASPER dataset of Natural Language Processing (NLP) questions and evidence. The QASPER dataset contains 5,049 questions over 1,585 papers, crowdsourced from NLP practitioners. To get the most out of this dataset, we will show you how to access the questions and evidence, as well as provide tips for getting started.

    Step 1: Accessing the Dataset
    To access the data you can download it from Kaggle's website or through a code version control system like GitHub. Once downloaded, you will find five files in .csv format: two test data sets (test.csv and validation.csv), two train data sets (train-v2-0_lessons_only_.csv and trainv2-0_unsplit.csv), as well as one figure data set (figures_and_tables_.json). Each .csv file contains a different dataset, with columns representing titles, abstracts, full texts, and Q&A fields with evidence for each paper mentioned in each row.

    Step 2: Analyzing Your Data Sets
    Now would be a good time to explore your datasets using basic descriptive statistics or more advanced predictive analytics, such as logistic regression or naive Bayes models, depending on what kind of analysis you would like to undertake. You can start simple by summarizing some basic crosstabs between any two variables in your dataset (titles, abstracts, etc.). As an example, try correlating title lengths with the number of words in their corresponding abstracts, then check whether there is anything worth investigating further; a rough sketch of this check follows below.
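    As a minimal illustration of that Step 2 check, the pandas sketch below loads one of the train splits named above and correlates title length with abstract length; the column names ("title", "abstract") are assumptions about the CSV layout rather than documented fields.

    import pandas as pd

    # Load one of the train splits listed in Step 1; adjust the path as needed.
    df = pd.read_csv("trainv2-0_unsplit.csv")

    # Word counts for titles and abstracts (column names are assumed).
    df["title_len_words"] = df["title"].str.split().str.len()
    df["abstract_len_words"] = df["abstract"].str.split().str.len()

    # Do longer titles tend to come with longer abstracts?
    print(df[["title_len_words", "abstract_len_words"]].describe())
    print("Pearson r:", df["title_len_words"].corr(df["abstract_len_words"]))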

    Step 3: Define Your Research Questions & Perform Further Analysis
    Once satisfied with your initial exploration, it is time to dig deeper into the underlying QR relationships among the different variables comprising your main documents. One way would be to use text-mining technologies such as topic modeling, machine learning techniques, or even automated processes that may help summarize any underlying patterns. Yet another approach could involve filtering terms that are relevant to a specific research hypothesis, then processing such terms via web crawlers, search engines, document similarity algorithms, etc.

    Finally, once all relevant parameters have been defined, analyzed, and searched, it makes sense to draw preliminary conclusions linking them back together before conducting replicable tests ensuring reproducible results.

    Research Ideas
    Developing AI models to automatically generate questions and answers from paper titles and abstracts.
    Enhancing machine learning algorithms by combining the answers with the evidence provided in the dataset to find relationships between papers.
    Creating online forums for NLP practitioners that use questions from this dataset to spark discussion within the community.

    License

    CC0

    Original Data Source: QASPER: NLP Questions and Evidence

  11. BIOGRID CURATED DATA FOR PUBLICATION: Proteome-scale mapping of binding sites in the unstructured regions of the human proteome

    • thebiogrid.org
    zip
    Updated Jan 1, 2022
    Cite
    BioGRID Project (2022). BIOGRID CURATED DATA FOR PUBLICATION: Proteome-scale mapping of binding sites in the unstructured regions of the human proteome. [Dataset]. https://thebiogrid.org/240122/publication/proteome-scale-mapping-of-binding-sites-in-the-unstructured-regions-of-the-human-proteome.html
    Available download formats: zip
    Dataset updated
    Jan 1, 2022
    Dataset authored and provided by
    BioGRID Project
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Protein-Protein, Genetic, and Chemical Interactions for Benz C (2022): Proteome-scale mapping of binding sites in the unstructured regions of the human proteome. Curated by BioGRID (https://thebiogrid.org). ABSTRACT: Specific protein-protein interactions are central to all processes that underlie cell physiology. Numerous studies have together identified hundreds of thousands of human protein-protein interactions. However, many interactions remain to be discovered, and low affinity, conditional, and cell type-specific interactions are likely to be disproportionately underrepresented. Here, we describe an optimized proteomic peptide-phage display library that tiles all disordered regions of the human proteome and allows the screening of ~1,000,000 overlapping peptides in a single binding assay. We define guidelines for processing, filtering, and ranking the results and provide PepTools, a toolkit to annotate the identified hits. We uncovered >2,000 interaction pairs for 35 known short linear motif (SLiM)-binding domains and confirmed the quality of the produced data by complementary biophysical or cell-based assays. Finally, we show how the amino-acid-resolution binding site information can be used to pinpoint functionally important disease mutations and phosphorylation events in intrinsically disordered regions of the proteome. The optimized human disorderome library paired with PepTools represents a powerful pipeline for unbiased proteome-wide discovery of SLiM-based interactions.

  12. HH-RLHF-Helpful-standard

    • huggingface.co
    Updated May 8, 2024
    Cite
    RLHFlow (2024). HH-RLHF-Helpful-standard [Dataset]. https://huggingface.co/datasets/RLHFlow/HH-RLHF-Helpful-standard
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 8, 2024
    Dataset authored and provided by
    RLHFlow
    Description

    We process the helpful subset of Anthropic-HH into the standard format. The filtering script is as follows.

    def filter_example(example):
        # Keep only examples whose chosen/rejected conversations have the same length
        if len(example['chosen']) != len(example['rejected']):
            return False
        # ... and whose turns come in complete user/assistant pairs
        if len(example['chosen']) % 2 != 0:
            return False

        n_rounds = len(example['chosen'])
        for i in range(len(example['chosen'])):
            # Roles must alternate: user, assistant, user, assistant, ...
            if example['chosen'][i]['role'] != ['user', 'assistant'][i % 2]:
                return False
            if …

    (The script is truncated here.) See the full description on the dataset page: https://huggingface.co/datasets/RLHFlow/HH-RLHF-Helpful-standard.
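    The listing cuts the script off at the final check. A plausible reading, assuming the remaining condition simply mirrors the role check on the rejected conversation (an assumption, not the dataset authors' exact code), is:

    def filter_example_reconstructed(example):
        # Hypothetical reconstruction of the truncated filter above.
        if len(example['chosen']) != len(example['rejected']):
            return False
        if len(example['chosen']) % 2 != 0:
            return False
        for i in range(len(example['chosen'])):
            expected_role = ['user', 'assistant'][i % 2]
            if example['chosen'][i]['role'] != expected_role:
                return False
            # Mirror the same role check on the rejected conversation (assumed).
            if example['rejected'][i]['role'] != expected_role:
                return False
        return True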
    
  13. Data from: A systematic evaluation of normalization methods and probe replicability using infinium EPIC methylation data

    • data.niaid.nih.gov
    • zenodo.org
    • +1 more
    zip
    Updated May 30, 2023
    Cite
    H. Welsh; C. M. P. F. Batalha; W. Li; K. L. Mpye; N. C. Souza-Pinto; M. S. Naslavsky; E. J. Parra (2023). A systematic evaluation of normalization methods and probe replicability using infinium EPIC methylation data [Dataset]. http://doi.org/10.5061/dryad.cnp5hqc7v
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    Universidade de São Paulo
    Hospital for Sick Children
    University of Toronto
    Authors
    H. Welsh; C. M. P. F. Batalha; W. Li; K. L. Mpye; N. C. Souza-Pinto; M. S. Naslavsky; E. J. Parra
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Background The Infinium EPIC array measures the methylation status of > 850,000 CpG sites. The EPIC BeadChip uses a two-array design: Infinium Type I and Type II probes. These probe types exhibit different technical characteristics which may confound analyses. Numerous normalization and pre-processing methods have been developed to reduce probe type bias as well as other issues such as background and dye bias.
    Methods This study evaluates the performance of various normalization methods using 16 replicated samples and three metrics: absolute beta-value difference, overlap of non-replicated CpGs between replicate pairs, and effect on beta-value distributions. Additionally, we carried out Pearson’s correlation and intraclass correlation coefficient (ICC) analyses using both raw and SeSAMe 2 normalized data.
    Results The method we define as SeSAMe 2, which consists of applying the regular SeSAMe pipeline with an additional round of QC (pOOBAH masking), was found to be the best-performing normalization method, while quantile-based methods were found to be the worst performing. Whole-array Pearson's correlations were found to be high. However, in agreement with previous studies, a substantial proportion of the probes on the EPIC array showed poor reproducibility (ICC < 0.50). The majority of poor-performing probes have beta values close to either 0 or 1, and relatively low standard deviations. These results suggest that probe reliability is largely the result of limited biological variation rather than technical measurement variation. Importantly, normalizing the data with SeSAMe 2 dramatically improved ICC estimates, with the proportion of probes with ICC values > 0.50 increasing from 45.18% (raw data) to 61.35% (SeSAMe 2).

    Methods

    Study Participants and Samples

    The whole blood samples were obtained from the Health, Well-being and Aging (Saúde, Bem-Estar e Envelhecimento, SABE) study cohort. SABE is a cohort of census-withdrawn elderly from the city of São Paulo, Brazil, followed up every five years since the year 2000, with DNA first collected in 2010. Samples from 24 elderly adults were collected at two time points for a total of 48 samples. The first time point is the 2010 collection wave, performed from 2010 to 2012, and the second time point was set in 2020 in a COVID-19 monitoring project (9±0.71 years apart). The 24 individuals were 67.41±5.52 years of age (mean ± standard deviation) at time point one, and 76.41±6.17 at time point two, and comprised 13 men and 11 women.

    All individuals enrolled in the SABE cohort provided written consent, and the ethics protocols were approved by local and national institutional review boards (COEP/FSP/USP OF.COEP/23/10, CONEP 2044/2014, CEP HIAE 1263-10, University of Toronto RIS 39685).

    Blood Collection and Processing

    Genomic DNA was extracted from whole peripheral blood samples collected in EDTA tubes. DNA extraction and purification followed the manufacturer's recommended protocols, using the Qiagen AutoPure LS kit with Gentra automated extraction (first time point) or manual extraction (second time point), due to discontinuation of the equipment but using the same commercial reagents. DNA was quantified using a NanoDrop spectrophotometer and diluted to 50 ng/µL. To assess the reproducibility of the EPIC array, we also obtained technical replicates for 16 out of the 48 samples, for a total of 64 samples submitted for further analyses. Whole Genome Sequencing data are also available for the samples described above.

    Characterization of DNA Methylation using the EPIC array

    Approximately 1,000ng of human genomic DNA was used for bisulphite conversion. Methylation status was evaluated using the MethylationEPIC array at The Centre for Applied Genomics (TCAG, Hospital for Sick Children, Toronto, Ontario, Canada), following protocols recommended by Illumina (San Diego, California, USA).

    Processing and Analysis of DNA Methylation Data

    The R/Bioconductor packages Meffil (version 1.1.0), RnBeads (version 2.6.0), minfi (version 1.34.0) and wateRmelon (version 1.32.0) were used to import, process and perform quality control (QC) analyses on the methylation data. Starting with the 64 samples, we first used Meffil to infer the sex of the 64 samples and compared the inferred sex to reported sex. Utilizing the 59 SNP probes that are available as part of the EPIC array, we calculated concordance between the methylation intensities of the samples and the corresponding genotype calls extracted from their WGS data. We then performed comprehensive sample-level and probe-level QC using the RnBeads QC pipeline. Specifically, we (1) removed probes if their target sequences overlap with a SNP at any base, (2) removed known cross-reactive probes, (3) used the iterative Greedycut algorithm to filter out samples and probes, using a detection p-value threshold of 0.01, and (4) removed probes if more than 5% of the samples had a missing value. Since RnBeads does not have a function to perform probe filtering based on bead number, we used the wateRmelon package to extract bead numbers from the IDAT files and calculated the proportion of samples with bead number < 3. Probes with more than 5% of samples having a low bead number (< 3) were removed. For the comparison of normalization methods, we also computed detection p-values using the out-of-band probes' empirical distribution with the pOOBAH() function in the SeSAMe (version 1.14.2) R package, with a p-value threshold of 0.05, and the combine.neg parameter set to TRUE. In the scenario where pOOBAH filtering was carried out, it was done in parallel with the previously mentioned QC steps, and the resulting probes flagged in both analyses were combined and removed from the data.

    Normalization Methods Evaluated

    The normalization methods compared in this study were implemented using different R/Bioconductor packages and are summarized in Figure 1. All data were read into the R workspace as RG Channel Sets using minfi's read.metharray.exp() function. One sample that was flagged during QC was removed, and further normalization steps were carried out in the remaining set of 63 samples. Prior to all normalizations with minfi, probes that did not pass QC were removed. Noob, SWAN, Quantile, Funnorm and Illumina normalizations were implemented using minfi. BMIQ normalization was implemented with ChAMP (version 2.26.0), using as input the Raw data produced by minfi's preprocessRaw() function. In the combination of Noob with BMIQ (Noob+BMIQ), BMIQ normalization was carried out using as input minfi's Noob-normalized data. Noob normalization was also implemented with SeSAMe, using a nonlinear dye bias correction. For SeSAMe normalization, two scenarios were tested. For both, the inputs were unmasked SigDF Sets converted from minfi's RG Channel Sets. In the first, which we call "SeSAMe 1", SeSAMe's pOOBAH masking was not executed, and the only probes filtered out of the dataset prior to normalization were the ones that did not pass QC in the previous analyses. In the second scenario, which we call "SeSAMe 2", pOOBAH masking was carried out in the unfiltered dataset, and masked probes were removed. This removal was followed by further removal of probes that did not pass previous QC and that had not been removed by pOOBAH. Therefore, SeSAMe 2 has two rounds of probe removal. Noob normalization with nonlinear dye bias correction was then carried out in the filtered dataset. Methods were then compared by subsetting the 16 replicated samples and evaluating the effects that the different normalization methods had on the absolute difference of beta values (|β|) between replicated samples.

  14. Geographic variation in urbanization filter effects on birds in China

    • figshare.com
    xlsx
    Updated Feb 7, 2025
    Cite
    Sidan Lin; Wei Liang (2025). Geographic variation in urbanization filter effects on birds in China [Dataset]. http://doi.org/10.6084/m9.figshare.28341962.v1
    Available download formats: xlsx
    Dataset updated
    Feb 7, 2025
    Dataset provided by
    figshare
    Authors
    Sidan Lin; Wei Liang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    China
    Description

    We conducted field surveys in the capital cities of all 34 provincial administrative regions in China (Figure 1). During the bird breeding season of 2023 (June-August), bird surveys were conducted using a standard point-count method (Bibby et al., 1992). Survey points were set in well-vegetated urban areas such as schools, parks, residential communities, and roads. Each survey point was monitored for 10 min, during which all avian species observed or heard within a 50 m radius were recorded. To reduce the probability of duplicate records, adjacent survey points were spaced at least 150 m apart. Each survey point was surveyed only once, with at least 40 survey points set in each city (some cities, like Macau and Hong Kong, had only 30 points due to their smaller size). Geographic coordinates and altitude were recorded for each survey point. The point-count surveys were conducted by 29 surveyors, each with over 2 years of birdwatching experience and a demonstrated ability to accurately identify the bird species in their respective cities. In total, we recorded the distribution and abundance of 231 bird species across 1,364 survey points in the 34 capital cities (Table S1).

    To obtain the potential regional species pool for each province, we collected the distribution data of all bird species across the Chinese provinces (Zheng, 2023). We excluded species belonging to the orders Phoenicopteriformes, Phaethontiformes, Gaviiformes, Procellariiformes, Ciconiiformes, and Suliformes, as these species inhabit coastal wetlands or open oceans and are rarely recorded in urban areas. Additionally, we excluded nocturnal species belonging to the order Strigiformes because our field surveys were conducted during the day only. We aligned the taxonomic framework of the species pool based on the work of Jetz et al. (2012) for subsequent phylogenetic analyses. Ultimately, we obtained distribution data for 1,186 bird species across the 34 provincial administrative regions in China (Table S2).

    In the present study, we define urban utilizers as bird species recorded at more than one survey point during our field surveys. Other species from the regional species pool were defined as urban avoiders. This approach was devised to avoid the misclassification of rare species that could have been recorded accidentally, thus ensuring a more accurate distinction between urban utilizers and avoiders. We also conducted a sensitivity analysis, employing a higher threshold (n = 2 survey points) to examine whether it would yield different outcomes.

    Here, we conducted field surveys of breeding bird communities in the capital cities of 34 Chinese provinces and categorized species as either urban utilizers or avoiders based on their occurrence rate among regional species pools. A Bayesian phylogenetic generalized linear mixed model was used to investigate potential trait associations between urban utilizers and urban avoiders. To investigate the relative effects (measured by the model coefficients) of urbanization filtering across different environmental gradients, we incorporated the model coefficients of each trait as the response variables, with city-specific environmental characteristics as predictors, to construct linear mixed models (a rough illustration of this second-stage model follows below). We found that urban utilizer birds exhibited greater behavioral innovation and broader habitat ranges than urban avoider birds. Additionally, we observed that the filtering effects of urbanization on different traits exhibited significant variations, showing gradients in latitude, altitude, and artificial night light intensity. This variation along geographic gradients could potentially elucidate the inconsistent findings reported in previous work on bird adaptation to urban environments. Our findings highlight the context-dependent nature of urbanization filtering effects and contribute to a better understanding of the impact of urbanization on urban bird communities.
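    The listing does not include the analysis code, but the second-stage model described above could be sketched in Python with statsmodels as follows; the input file, column names, and the choice of trait as the grouping factor are illustrative assumptions rather than the authors' actual setup.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical second-stage table: one row per city x trait, holding the
    # city-specific filtering coefficient for that trait plus the city's
    # environmental characteristics.
    df = pd.read_csv("city_trait_coefficients.csv")

    model = smf.mixedlm(
        "filter_coefficient ~ latitude + altitude + night_light_intensity",
        data=df,
        groups=df["trait"],  # random intercept per trait (assumed structure)
    )
    result = model.fit()
    print(result.summary())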

  15. Data from: Code and data.

    • plos.figshare.com
    zip
    Updated Jun 4, 2025
    Cite
    Crispin H. V. Cooper; Kevin Fahey; Regan Jones (2025). Code and data. [Dataset]. http://doi.org/10.1371/journal.pone.0324507.s003
    Available download formats: zip
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Crispin H. V. Cooper; Kevin Fahey; Regan Jones
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Echo chambers are widely acknowledged as a feature of online discourse and current politics: a phenomenon arising when people selectively engage with like-minded others and are shielded from opposing ideas. Various studies have operationalized the concept through studying opinions, interactions, reinforcement or group identity. Echo chambers both feed and are fed by the false consensus effect, whereby people overestimate the degree to which others share their views, with algorithmic filtering of social media also a contributing factor. Although there is strong evidence that meta-opinions - that is, people’s perceptions of others’ opinions - often fail to reflect reality, no attempt has been made to explore the space of meta-opinions, or detect echo chambers within this space. We created a new, information-theoretic method for directly quantifying the information content of meta-opinions, allowing detailed exploratory analysis of their relationships with demographic factors and underlying opinions. In a gamified survey (presented as a quiz) of 476 UK respondents, we found both the liberal left, and also people at both extremes of the left/right scale, to have more accurate knowledge of others’ opinions. Surprisingly however, we found that meta-opinions, although displaying significant false consensus effects, were not divided into any strong clusters representative of echo chambers. We suggest that the metaphor of discrete echo chambers may be inappropriate for meta-opinions: while measures of meta-opinion accuracy and its influences can reveal echo chamber characteristics where other metrics confirm their presence, the presence or absence of meta-opinion clusters is not itself sufficient to define an echo chamber. We publish both data and analysis code as supplementary material.
