DRAKO is a Mobile Location Audience Targeting provider with a programmatic trading desk specialising in geolocation analytics and programmatic advertising. Through our customised approach, we offer business and consumer insights as well as addressable audiences for advertising.
Mobile Location Data can be meaningfully transformed into Audience Targeting when used in conjunction with other datasets. Our expansive POI Data allows us to segment users by visitation to major brands and retailers and to categorize them into syndicated segments. Beyond POI visits, our proprietary Home Location Model determines residents of geographic areas such as Designated Market Areas, Counties, or States. The Home Location Model also fuels our Geodemographic Census Data segments, as we are able to determine residents of the smallest census units. Additionally, we offer audiences of ticketed event and venue visitors, survey data, and retail data.
All of our Audience Targeting is 100% deterministic in that it only includes high-quality, real visits to locations as defined by a POI's building contour derived from satellite imagery. We never use a radius when building an audience unless requested, and we work with a horizontal accuracy of 5 m.
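To illustrate why a building-contour geofence is stricter than a radius, here is a minimal, hypothetical sketch (not the vendor's implementation) using the standard ray-casting point-in-polygon test: a ping only counts as a visit if it falls inside the contour polygon, whereas a radius around the building centre would also capture nearby passers-by.

```python
# Hypothetical sketch: polygon geofence vs. radius targeting.
# A visit counts only if the device ping falls inside the building
# contour; this is the standard ray-casting point-in-polygon check.

def in_polygon(point, polygon):
    """Return True if point (x, y) lies inside polygon (list of vertices)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative building contour (a unit square, coordinates made up):
store = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(in_polygon((0.5, 0.5), store))  # ping inside the building -> True
print(in_polygon((1.4, 0.5), store))  # nearby ping a radius would wrongly include -> False
```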
Additionally, we can always cross reference your audience targeting with our syndicated segments:
Overview of our Syndicated Audience Data Segments:
- Brand/POI segments (specific named stores and locations)
- Categories (behavioural segments - revealed habits)
- Census demographic segments (HH income, race, religion, age, family structure, language, etc.)
- Events segments (ticketed live events, conferences, and seminars)
- Resident segments (State/province, CMAs, DMAs, city, county, sub-county)
- Political segments (Canadian Federal and Provincial, US Congressional Upper and Lower House, US States, City elections, etc.)
- Survey Data (psychosocial/demographic survey data)
- Retail Data (receipt/transaction data)
All of our syndicated segments are customizable. That means you can limit them to people within a certain geography, remove employees, include only the most frequent visitors, define your own custom lookback, or extend our audiences using our Home, Work, and Social Extensions.
In addition to our syndicated segments, we’re also able to run custom queries that return all the Mobile Ad IDs (MAIDs) seen at a specific location (address; latitude and longitude; or WKT polygon) or in your defined geographic area of interest (political districts, DMAs, ZIP codes, etc.).
Beyond just returning all the MAIDs seen within a geofence, we are also able to offer additional customizable advantages:
- Average precision between 5 and 15 meters
- CRM list activation + extension
- Extend beyond Mobile Location Data (MAIDs) with our device graph
- Filter by frequency of visitation
- Home and Work targeting (retrieve only employees or residents of an address)
- Home extensions (devices that reside in the same dwelling as your seed geofence)
- Rooftop-level address geofencing precision (no radius used EVER unless user specified)
- Social extensions (devices in the same social circle as users in your seed geofence)
- Turn analytics into addressable audiences
- Work extensions (coworkers of users in your seed geofence)
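The "filter by frequency of visitation" option can be pictured with a small sketch. This is an illustrative example only; the MAIDs, dates, and function name are made up, and the vendor's actual pipeline is not public.

```python
from collections import Counter

# Hypothetical sketch: given raw (maid, visit_date) observations from a
# geofence, keep only devices seen at least `min_visits` times.
observations = [
    ("maid-a", "2024-05-01"), ("maid-a", "2024-05-08"), ("maid-a", "2024-05-15"),
    ("maid-b", "2024-05-02"),
    ("maid-c", "2024-05-03"), ("maid-c", "2024-05-20"),
]

def frequent_visitors(observations, min_visits=2):
    """Return the sorted MAIDs with at least min_visits observed visits."""
    counts = Counter(maid for maid, _ in observations)
    return sorted(m for m, c in counts.items() if c >= min_visits)

print(frequent_visitors(observations, min_visits=2))  # ['maid-a', 'maid-c']
```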
Data Compliance: All of our Audience Targeting Data is fully CCPA compliant and 100% sourced from SDKs (Software Development Kits), the most reliable and consistent mobile data stream available with end-user consent, with only a 4-5 day delay. Our location and device ID data comes from partnerships with over 1,500 mobile apps, and each record carries an associated location, which is how we are able to segment using geofences.
Data Quality: In addition to partnering with trusted SDKs, DRAKO has additional screening methods to ensure that our mobile location data is consistent and reliable. This includes data harmonization and quality scoring from all of our partners in order to disregard MAIDs with a low quality score.
The Datacovid COVID-19 barometer, through a partnership with IPSOS, collects accurate data on French people’s behaviours and their impact on the dynamics of the epidemic during the COVID-19 phase, in order to offer the data as open data to the scientific community, public administrations, businesses and all citizens. The challenge is to respond quickly to the information gap in epidemic management systems, both in the current lockdown period and in the period that follows.

The Datacovid COVID-19 barometer consists of three categories of information on an unbiased panel representative of the population:
1. information on symptoms of infection and medical history;
2. behavioural parameters on the monitoring of containment rules and compliance with barrier gestures;
3. sociodemographic, economic and psychological characteristics of the respondents.

CONDITIONS FOR THE USE OF DATASETS FROM THE COVID-19 BAROMETER

The datasets published by datacovid.org on its website datacovid.org/data, datacovid.org/api and on data.gouv.fr come from the Covid-19 Barometer operated by IPSOS in partnership with datacovid.org, a non-profit association governed by the French law of 1901. These datasets:
- are governed by French law and by the terms of use of the datacovid.org website,
- are published on the Internet for a non-profit, scientific and citizen purpose, to fill the information gap on systems of societal management of epidemics,
- have previously been redacted of any data enabling the identification of a person who responded to the Covid-19 Barometer,
- are open, i.e. they can be consulted, used and shared by all, in particular for research purposes, including scientific, historical and statistical purposes,
- shall not give rise to commercial use, except that results derived from those datasets, directly or indirectly, may also be opened, within the meaning defined above, and brought to the attention of datacovid.org to ensure their opening,
- must not be reconciled with other datasets or other resources under conditions that would allow a third party, by correlation, inference or by any means whatsoever, to identify a person who responded to the Covid-19 Barometer.

As a result:
- any use of these datasets, or of all or part of their constituent elements, which does not comply with each of the conditions listed above is prohibited;
- any commercial use not open within the meaning defined above is prohibited and would be liable to prosecution and civil, administrative and criminal penalties in accordance with the French regulations in force;
- any correlation or inference between the constituent elements of these datasets and other data sources that would allow a third party to identify a person who responded to the Covid-19 Barometer would engage the liability of that third party towards datacovid.org and any person concerned, and would be liable to civil, administrative and criminal proceedings and penalties under the French legislation in force.

By downloading or making available to a third party data from the Covid-19 Barometer, I undertake to comply with the above objectives and the terms of use of the datasets published by datacovid.org. If I have any questions or doubts, I will contact contact@datacovid.org, while remaining responsible for my actions and those of my agents and service providers.
NB: Traffic data relating to access to the datacovid.org site is processed by datacovid.org and its service providers in order to measure site traffic and to ensure the availability, integrity and security of the site and its contents, under conditions and retention periods in accordance with French regulations. Any natural person who proves their identity may write to contact@datacovid.org to exercise the rights guaranteed by the French and European regulations in force relating to the protection of personal data and privacy, in particular the rights of access, objection or deletion of personal data concerning them processed by datacovid.org.
‘DfE external data shares’ includes:
DfE also provides external access to data under Section 64, Chapter 5, of the Digital Economy Act 2017 (https://www.legislation.gov.uk/ukpga/2017/30/section/64/enacted). Details of these data shares can be found in the UK Statistics Authority list of accredited projects (https://uksa.statisticsauthority.gov.uk/digitaleconomyact-research-statistics/better-useofdata-for-research-information-for-researchers/list-of-accredited-researchers-and-research-projects-under-the-research-strand-of-the-digital-economy-act/).

Previous external data shares can be viewed in the National Archives (https://webarchive.nationalarchives.gov.uk/ukgwa/timeline1/https://www.gov.uk/government/publications/dfe-external-data-shares).
The data in the archived documents may not match DfE’s internal data request records due to definitions or business rules changing following process improvements.
AI in Consumer Decision-Making: Global Zero-Party Dataset
This dataset captures how consumers around the world are using AI tools like ChatGPT, Perplexity, Gemini, Claude, and Copilot to guide their purchase decisions. It spans multiple product categories, demographics, and geographies, mapping the emerging role of AI as a decision-making companion across the consumer journey.
What Makes This Dataset Unique
Unlike datasets inferred from digital traces or modeled from third-party assumptions, this collection is built entirely on zero-party data: direct responses from consumers who voluntarily share their habits and preferences. That means the insights come straight from the people making the purchases, ensuring unmatched accuracy and relevance.
For FMCG leaders, retailers, and financial services strategists, this dataset provides the missing piece: visibility into how often consumers are letting AI shape their decisions, and where that influence is strongest.
Dataset Structure
Each record is enriched with:
- Product Category – from high-consideration items like electronics to daily staples such as groceries and snacks.
- AI Tool Used – identifying whether consumers turn to ChatGPT, Gemini, Perplexity, Claude, or Copilot.
- Influence Level – the percentage of consumers in a given context who rely on AI to guide their choices.
- Demographics – generational breakdowns from Gen Z through Boomers.
- Geographic Detail – city- and country-level coverage across Africa, LATAM, Asia, Europe, and North America.
This structure allows filtering and comparison across categories, age groups, and markets, giving users a multidimensional view of AI’s impact on purchasing.
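The kind of cross-filtering described above can be sketched with a few lines of code. The field names and sample values here are illustrative only, not the vendor's actual schema.

```python
# Hypothetical records mirroring the layout described above
# (category, AI tool, influence level, generation, geography).
records = [
    {"category": "Electronics", "ai_tool": "ChatGPT", "influence_pct": 42,
     "generation": "Gen Z", "country": "Brazil"},
    {"category": "Groceries", "ai_tool": "Gemini", "influence_pct": 11,
     "generation": "Boomers", "country": "Germany"},
    {"category": "Electronics", "ai_tool": "Perplexity", "influence_pct": 35,
     "generation": "Millennials", "country": "Kenya"},
]

def filter_records(records, **criteria):
    """Return records matching every supplied field=value criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

electronics = filter_records(records, category="Electronics")
print(len(electronics))                              # 2
print(max(r["influence_pct"] for r in electronics))  # 42
```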
Why It Matters
AI has become a trusted voice in consumers’ daily lives. From meal planning to product comparisons, many people now consult AI before making a purchase—often without realizing how much it shapes the options they consider. For brands, this means that the path to purchase increasingly runs through an AI filter.
This dataset provides a comprehensive view of that hidden step in the consumer journey, enabling decision-makers to quantify:
- How much AI shapes consumer thinking before they even reach the shelf or checkout.
- Which product categories are most influenced by AI consultation.
- How adoption varies by geography and generation.
- Which AI platforms are most commonly trusted by consumers.
Opportunities for Business Leaders
- FMCG & Retail Brands: Understand where AI-driven decision-making is already reshaping category competition.
- Marketers: Identify demographic segments most likely to consult AI, enabling targeted strategies.
- Retailers: Align assortments and promotions with the purchase patterns influenced by AI queries.
- Investors & Innovators: Gauge market readiness for AI-integrated commerce solutions.
The dataset doesn’t just describe what’s happening—it opens doors to the “so what” questions that define strategy. Which categories are becoming algorithm-driven? Which markets are shifting fastest? Where is the opportunity to get ahead of competitors in an AI-shaped funnel?
Why Now
Consumer AI adoption is no longer a forecast; it is a daily behavior. Just as search engines once rewrote the rules of marketing, conversational AI is quietly rewriting how consumers decide what to buy. This dataset offers an early, detailed view into that change, giving brands the ability to act while competitors are still guessing.
What You Get
Users gain:
- A global, city-level view of AI adoption in consumer decision-making.
- Cross-category comparability to see where AI influence is strongest and weakest.
- Generational breakdowns that show how adoption differs between younger and older cohorts.
- AI platform analysis, highlighting how tool preferences vary by region and category.

Every row is powered by zero-party input, ensuring the insights reflect actual consumer behavior—not modeled assumptions.
How It’s Used
Leverage this data to:
- Validate strategies before entering new markets or categories.
- Benchmark competitors on AI readiness and influence.
- Identify growth opportunities in categories where AI-driven recommendations are rapidly shaping decisions.
- Anticipate risks where brand visibility could be disrupted by algorithmic mediation.
Core Insights
The full dataset reveals:
- Surprising adoption curves across categories where AI wasn’t expected to play a role.
- Geographic pockets where AI has already become a standard step in purchase decisions.
- Demographic contrasts showing who trusts AI most—and where skepticism still holds.
- Clear differences between AI platforms and the consumer profiles most drawn to each.
These patterns are not visible in traditional retail data, sales reports, or survey summaries. They are only captured here, directly from the consumers themselves.
Summary
Winning in FMCG and retail today means more than getting on shelves, capturing price points, or running promotions. It means understanding the invisible algorithms consumers are ...
YELP DATASET TERMS OF USE

Last Updated: February 16, 2021

This document (“Data Agreement”) governs the terms under which you may access and use the data that Yelp makes available for download through this website (or made available by other means) solely for academic or non-commercial purposes (the “Data”).

Yelp Terms of Service: By accessing or using the Data, you agree to be bound by the Data Agreement and represent that the contact information you provide to Yelp is correct. If you access or use the Data on behalf of a university, school, or other entity, you represent that you have authority to bind such entity and its affiliates to the Data Agreement and that it is fully binding upon them. In such a case, the terms “you” and “your” will refer to such entity and its affiliates. If you do not have authority, or if you do not agree with the terms of the Data Agreement, you may not access or use the Data. You should read and keep a copy of each component of the Data Agreement for your records. In the event of a conflict among them, the terms of this document will control.

1. Purpose. The Data is made available by Yelp Inc. (“Yelp”) to enable you to access valuable local information to develop an academic project as part of an ongoing course of study, or for other non-commercial purposes. With this in mind, Yelp reserves the right to continually review and evaluate all uses of the Data provided under the Data Agreement. Yelp may authorize limited commercial use under certain circumstances: for example, access and use by journalists to explore the data and generate ideas prior to formal data access requests through Yelp’s PR department.

2. Changes. Yelp reserves the right to modify or revise the Data Agreement at any time.
If a change is deemed material and it is foreseeable that it could be adverse to your interests, Yelp will provide notice of the change to this Data Agreement by sending an email to the address you provided to Yelp. Your continued use of the Data after notice of a material change will constitute your acceptance of and agreement to such changes. IF YOU DO NOT WISH TO BE BOUND BY ANY NEW TERMS, YOU MUST TERMINATE THE DATA AGREEMENT BY IMMEDIATELY CEASING USE OF THE DATA AND DELETING IT FROM ANY SYSTEMS OR MEDIA.

3. License. Subject to the terms set forth in the Data Agreement (specifically the restrictions set forth in Section 4 below), Yelp grants you a royalty-free, non-exclusive, revocable, non-sublicensable, non-transferable, fully paid-up right and license during the Term to use, access, and create derivative works of the Data in electronic form solely for non-commercial use. Non-commercial use means use of the Data by registered nonprofits, government, educational institutions, and think tanks which (a) is not undertaken for profit, or (b) is not intended to produce works, services, or data for commercial use. You may not use the Data for any other purpose without Yelp’s prior written consent. You acknowledge and agree that Yelp may request information about, review, audit, and/or monitor your use of the Data at any time in order to confirm compliance with the Data Agreement. Nothing herein shall be construed as a license to use Yelp’s registered trademarks or service marks, or any other Yelp branding. Prior to any public presentation or publication of academic results or conclusions that involve the Data and/or the Yelp brand name, you must submit your findings to Yelp for review and approval, and Yelp will approve the public release within five (5) business days of its submission to Yelp.

4. Restrictions. You agree that you will not, and will not encourage, assist, or enable others to:
A. display, perform, or distribute any of the Data, or use the Data to update or create your own business listing information for commercial purposes (i.e. you may not publicly display any of the Data to any third party, especially reviews and other user-generated content, as this is a private data set challenge and not a license to compete with or disparage Yelp);
B. use the Data in connection with any commercial purpose;
C. use the Data in any manner or for any purpose that may violate any law or regulation, or any right of any person including, but not limited to, intellectual property rights, rights of privacy and/or rights of personality, or which otherwise may be harmful (in Yelp's sole discretion) to Yelp, its providers, its suppliers, end users of this website, or your end users;
D. use the Data on behalf of any third party without Yelp’s consent;
E. create, redistribute or disclose any summary of, or metrics related to, the Data (e.g., the number of reviewed businesses included in the Data and other statistical analysis) to any third party or on any website or other electronic media not expressly covered by this Agreement or without Yelp’s ...
According to our latest research, the global Metrics Layer Platforms market size reached USD 1.38 billion in 2024, reflecting robust adoption across diverse industries. The market is experiencing a strong growth trajectory, with a CAGR of 19.7% projected from 2025 to 2033. By the end of 2033, the market is expected to attain a substantial value of USD 6.66 billion, driven by the increasing demand for unified data governance, real-time analytics, and streamlined business intelligence processes. This significant growth is primarily attributed to the rising complexity of data environments and the urgent need for consistent, scalable, and reliable metrics solutions across enterprises of all sizes.
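The headline figures follow standard compound-growth arithmetic. As a quick sketch (the endpoint values come from the paragraph above; small differences from the quoted CAGR reflect rounding in the source figures):

```python
# Implied CAGR from the stated 2024 and 2033 market sizes:
# CAGR over n years = (end / start) ** (1 / n) - 1.
start, end, years = 1.38, 6.66, 9   # USD billions, 2024 -> 2033

implied_cagr = (end / start) ** (1 / years) - 1
print(f"{implied_cagr:.1%}")  # roughly 19%, consistent with the projection after rounding
```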
A major growth factor propelling the Metrics Layer Platforms market is the exponential surge in data volumes and the proliferation of disparate data sources within modern organizations. As businesses expand their digital footprints, they generate vast quantities of structured and unstructured data from various channels, including cloud applications, IoT devices, and third-party platforms. The lack of a centralized metrics layer often leads to data silos, inconsistent reporting, and inefficiencies in decision-making. Metrics Layer Platforms address these challenges by offering a single source of truth for business metrics, enabling organizations to standardize definitions, improve data quality, and accelerate time-to-insight. The ability to harmonize data across departments and applications not only enhances operational efficiency but also empowers stakeholders with accurate, actionable intelligence critical for strategic planning and competitive differentiation.
Another key driver of market growth is the increasing adoption of cloud-based analytics and business intelligence solutions. As enterprises migrate their data infrastructure to the cloud, there is a growing need for platforms that can seamlessly integrate with multiple cloud services, data warehouses, and analytics tools. Metrics Layer Platforms play a pivotal role in this transition by providing a unified abstraction layer that decouples metric definitions from underlying data sources. This abstraction simplifies data management, reduces the burden on IT teams, and ensures that business users can access consistent metrics regardless of the tools or platforms they use. Moreover, the scalability and flexibility offered by cloud-native Metrics Layer Platforms make them particularly attractive for organizations seeking to future-proof their analytics stack and support rapid innovation.
The market is also benefiting from the rising emphasis on data governance, compliance, and regulatory requirements. With stricter data privacy laws and an increased focus on transparency, organizations are under pressure to ensure that their data analytics processes are auditable, reliable, and aligned with industry standards. Metrics Layer Platforms facilitate robust governance by centralizing metric definitions, tracking data lineage, and providing comprehensive audit trails. This not only mitigates compliance risks but also fosters a culture of trust and accountability within the organization. As a result, sectors such as BFSI, healthcare, and retail, which operate in highly regulated environments, are accelerating their investments in Metrics Layer Platforms to safeguard sensitive data and maintain regulatory compliance.
From a regional perspective, North America continues to dominate the Metrics Layer Platforms market, fueled by the presence of leading technology vendors, advanced digital infrastructure, and a mature analytics ecosystem. The region accounted for the largest revenue share in 2024, with the United States serving as a major growth engine. Europe follows closely, driven by stringent data protection regulations and a strong focus on data-driven decision-making. Meanwhile, the Asia Pacific region is witnessing the fastest growth, supported by rapid digital transformation initiatives, expanding cloud adoption, and increasing investments in big data analytics across emerging economies such as China, India, and Southeast Asia.
According to our latest research, the global Telco Edge Data Broker market size in 2024 stands at USD 1.42 billion, with a robust CAGR of 21.7% anticipated from 2025 to 2033. This rapid expansion is projected to drive the market to USD 10.37 billion by 2033. The primary growth factor for this market is the surging demand for real-time data processing and analytics at the edge, which is transforming how telecom operators and enterprises leverage network data to optimize operations, monetize assets, and deliver next-generation digital services.
One of the most significant drivers propelling the Telco Edge Data Broker market is the exponential increase in data traffic generated by emerging technologies such as 5G, IoT, and edge computing. Telecom operators are under immense pressure to handle vast volumes of data generated at the network edge, where latency-sensitive applications require real-time processing and insights. By deploying edge data broker platforms, telcos can efficiently aggregate, process, and distribute data closer to the source, minimizing latency and enhancing user experiences. Additionally, these solutions enable telecom operators to unlock new revenue streams by securely sharing and monetizing data with third-party partners, application developers, and enterprises, thereby maximizing the value of their network infrastructure investments.
Another key growth factor is the increasing adoption of cloud-native architectures and virtualization across the telecom ecosystem. As operators transition from legacy infrastructure to software-defined networks and virtualized network functions, the need for agile, scalable, and interoperable data broker solutions becomes paramount. Edge data brokers facilitate seamless integration between on-premises and cloud environments, empowering telcos to deploy flexible data management frameworks that support rapid service innovation. This trend is further accelerated by the proliferation of multi-access edge computing (MEC), which enables distributed data processing and fosters a new wave of low-latency, high-bandwidth applications such as augmented reality, autonomous vehicles, and smart cities.
Furthermore, regulatory and privacy considerations are shaping the evolution of the Telco Edge Data Broker market. With increasing scrutiny on data sovereignty, compliance, and user consent, telecom operators are prioritizing solutions that provide robust data governance, security, and policy enforcement mechanisms. Edge data brokers play a critical role in ensuring data is managed and exchanged in accordance with regional and industry-specific regulations, such as GDPR in Europe or CCPA in California. This compliance-driven approach not only mitigates risk but also builds trust with enterprise customers and end-users, positioning telcos as reliable custodians of sensitive data in a rapidly digitizing world.
Regionally, North America and Asia Pacific are at the forefront of Telco Edge Data Broker adoption, driven by aggressive 5G rollouts, a vibrant ecosystem of technology innovators, and substantial investments in edge infrastructure. Europe is also witnessing significant growth, fueled by regulatory initiatives and cross-industry collaborations aimed at fostering data sharing and digital transformation. In contrast, markets in Latin America and the Middle East & Africa are gradually catching up, with telcos exploring edge data brokering as a means to bridge connectivity gaps and unlock new business opportunities. This global momentum underscores the strategic importance of edge data brokers in shaping the future of telecommunications and digital services worldwide.
The Telco Edge Data Broker market by component is segmented into Platform and Services, each playing a distinct role in the ecosystem. The platform segment encompasses the core software and hardware solutions that enable data collection, aggregation, processing, and distribution at the network edge. These platforms are engineered to handle massive data volumes, support real-time analytics, and provide seamless integration with existing network infrastructure and third-party applications. As telcos continue to modernize their networks, the demand for advanced data broker platforms is surging, driven by the need for high performance, scalability, and interoperability across heterogeneous environments.
https://creativecommons.org/publicdomain/zero/1.0/
The dataset contains the details of patent litigation cases in the United States from 2000 to 2021. The team collected the litigation data in two phases. The first phase looked at data from 2010 onward, specifically within Texas's Western and Eastern Districts. Unified Patents' Portal includes litigation data in which each plaintiff has been marked as NPE (Patent Assertion Entity), NPE (Small Company), or NPE (Individual).
Using these definitions, Unified first focused on identifying which NPEs were aggregators and then whether they involved third-party financing. NPE aggregators were defined as NPEs with more than one affiliated subsidiary bringing patent litigation. An example of this would be IP Edge and the various limited liability companies under IP Edge's control that have brought numerous litigations against operating companies. Third-party financing was defined as evidence of any third party with a financial interest other than the assertors.
With a narrow focus on the Western and Eastern District of Texas, Unified then used several public databases, such as Edgar, USPTO Assignment Records, the NPE Stanford Database, press releases, and its database of NPEs to identify any aggregator and any third-party financial interest, as well as various secretary of state corporate filings or court-ordered disclosures. After these two districts were identified, Unified expanded the data to cover the top five most litigious venues for patents, including the Western and Eastern Districts of Texas, Delaware, and the North and Central Districts of California. (On average, over the past five years, these districts have seen about 70% of all patent litigation.) Once that was completed, that dataset was then expanded to include all jurisdictions from 2010 and on.
The final step was to complete the data set from 2000 to 2009. The team followed a similar data collection process using Lex Machina, the NPE Stanford Database, and Unified's Portal. Unified identified all of the litigation known to be NPE-related. Using the top five jurisdictions' aggregation and financing data, aggregator entities—such as Intellectual Ventures—were identified using the same methodology. The current dataset covers 2000-2021, determines who is an NPE, notes which NPEs are aggregators, and identifies which aggregators are known to have third-party financing.
Note: there are currently no reporting requirements Federally, at the state level, or in the courts to publicly disclose the financing details of nonpublic entities. Thus, any data analysis of which litigations are funded or financed is incomplete, as many of these arrangements are closely held, private, and unknown even to the courts and the parties to the actions. This data set describes the minimum known amount of third-party-funded patent litigation. It is necessarily underinclusive of all nonpublic deals for which there is no available evidence or insight. For further generalized industry information on the size and scope of litigation funding for patent litigations, private sources often report on the size and scope of the burgeoning industry in the aggregate. For example, see Westfleet Advisor's 2021 Litigation Finance Report, available at https://www.westfleetadvisors.com/publications/2021-litigation-finance-report/.
https://data.linz.govt.nz/license/attribution-4-0-international/
This index enables you to identify freely available digital bathymetric surface models owned by LINZ. This data provides a 3-dimensional model of the surface of the seafloor.
These surface models have been created by LINZ from publicly funded single- or multi-beam data collected in the New Zealand coastal area since early 1998. The polygons in the index show the extent of these gridded data models and include descriptive information, such as the age and quality of the data.
Please refer to the LINZ Bathymetric Index Data Dictionary for further information about the attributes of this dataset, and formats in which the data is available.
How to order the data: Requests for the models should be sent to hydro@linz.govt.nz with “Hydro Bathy Data” in the subject line. Requests must, as a minimum, specify the id and surf_name of the models of interest and the data format (see the options in Section 1.4 of the Bathymetric Data Dictionary).
LINZ has also created 3-dimensional bathymetric surface models for the New Zealand coastal area from data provided by third parties, which is subject to different licensing terms and conditions. View our NZ Bathymetric Surface Model Index – Third Party dataset to request this data.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied.
Potential Groundwater Dependent Ecosystems (GDE) are ecosystems identified within the landscape as likely to be at least partly dependent on groundwater. State-wide screening analysis was performed to identify locations of potential terrestrial GDEs, including wetland areas. The GDE mapping was developed utilising satellite remote sensing data, geological data and groundwater monitoring data in a GIS overlay model. Validation of the model through field assessment has not been performed. The method has been applied for all of Victoria and is the first step in identifying potential groundwater dependent ecosystems that may be threatened by activities such as drainage and groundwater pumping. The dataset specifically covers the West Gippsland Catchment Management Authority (CMA) area.
The method used in this research is based upon the characteristics of a potential GDE containing area as one that:
Has access to groundwater. By definition, a GDE must have access to groundwater. For GDE occurrences associated with wetlands and river systems, the water table will be at the surface with a zone of capillary extension. In the case of terrestrial GDEs (outside of wetlands and river systems), these are dependent on the interaction between the depth to the water table and the rooting depth of the vegetation community.
Has summer (dry period) use of water. Due to the physics of root water uptake, GDEs will use groundwater when other sources are no longer available; this is generally in summer for the Victorian climate. The ability to use groundwater during dry periods creates a contrasting growth pattern with surrounding landscapes where growth has ceased.
Has consistent growth patterns: vegetation that uses water all year round will have perennial growth patterns.
Has growth patterns similar to verified GDEs.
The current mapping does not indicate the degree of groundwater dependence, only locations in the landscape of potential groundwater dependent ecosystems. This dataset does not directly support interpretation of the amount of dependence or the amount of groundwater used by the regions highlighted within the maps. Further analysis and more detailed field based data collection are required to support this.
The core data used in the modelling is largely circa 1995 to 2005. It is expected that the methodology used will overestimate the extent of terrestrial GDEs. There will be locations that appear from evapotranspiration (ET) data to fulfil the definition of a GDE (as defined by the mapping model) that may not be using groundwater. Two prominent examples are: 1. Riparian zones along sections of rivers and creeks that have deep water tables, where the stream feeds the groundwater system and the riparian vegetation is able to access this water flow, as well as any bank storage contained in the valley alluvium. 2. Forested regions that are accessing large unsaturated regolith water stores. The terrestrial GDE layer polygons are classified based on the expected depth to groundwater (i.e. shallow <5 m or deep >5 m). Additional landscape attributes are also assigned to each mapping polygon.
In 2011-2012 a species tolerance model was developed by the Arthur Rylah Institute, in collaboration with DPI, to model landscapes with the ability to support GDEs and to provide a relative measure of the sensitivity of those ecosystems to changes in groundwater availability and quality. Rev 1 of the GDE mapping incorporates species tolerance model attributes for each potential GDE polygon and attributes for interpreted depth to groundwater. Separate datasets and associated metadata records have been created for GDE species tolerance.
Data Set Source:
The maps are created from base layers of Landsat imagery from 1988 to 2005, supplied by the Australian Greenhouse Office (AGO), and time-series data from the NASA WIST website, which supplied the MODIS (MOD13Q1) product for 2003. Additional base layers used include the Geological 250 series, Geomorphological Management Units, Wetland and EVC layers, and Stream Gauge catchments. Expert analysis selected thresholds within the data sets, which were combined within a weighted overlay model and converted to shapefiles for use.
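The weighted-overlay step described above can be sketched in a few lines. This is a hedged, hypothetical illustration of the general technique, not the actual DPI model: the layer names, thresholds, and weights below are invented for the example.

```python
# Hypothetical sketch of a GIS weighted overlay: each input grid is
# thresholded into a 0/1 suitability layer, then combined by weight.
# Layer names, thresholds, and weights are invented for illustration.

def weighted_overlay(layers, thresholds, weights):
    """layers: dict name -> 2D list; thresholds: name -> cutoff;
    weights: name -> weight (weights sum to 1). Returns a score grid."""
    names = list(layers)
    rows = len(layers[names[0]])
    cols = len(layers[names[0]][0])
    score = [[0.0] * cols for _ in range(rows)]
    for name in names:
        grid = layers[name]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] >= thresholds[name]:  # cell passes this criterion
                    score[r][c] += weights[name]
    return score

# Toy 2x2 grids standing in for e.g. summer ET and growth consistency.
layers = {
    "summer_et": [[0.8, 0.2], [0.6, 0.9]],
    "perennial_growth": [[0.7, 0.1], [0.4, 0.8]],
}
thresholds = {"summer_et": 0.5, "perennial_growth": 0.5}
weights = {"summer_et": 0.5, "perennial_growth": 0.5}
score = weighted_overlay(layers, thresholds, weights)
# Cells with a score of 1.0 satisfy every weighted criterion and would
# be flagged as potential GDE locations.
potential_gde = [[cell >= 1.0 for cell in row] for row in score]
```

In a real workflow each "grid" would be a raster read from the source layers, and the flagged cells would be vectorised to the shapefile polygons the dataset describes.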
Collection Method:
Not available (see Abstract for context and background information).
Processing Steps:
Not available (see Abstract for context and background information).
For further information, see:
Dresel PE, Clark R, Cheng X, Reid M, Terry A, Fawcett J and Cochrane D 2010, Mapping Terrestrial Groundwater Dependent Ecosystems: Method Development and Example Output, Department of Primary Industries, Melbourne, Victoria, 66 pp.
Victorian Department of Environment and Primary Industries (2014) Potential Groundwater Dependent Ecosystems for West Gippsland Catchment Management Authority. Bioregional Assessment Source Dataset. Viewed 07 February 2017, http://data.bioregionalassessments.gov.au/dataset/e5499022-8e74-4412-8516-c66d163e6e75.
Information was reported as correct by central government departments at 29 February 2012.
In its Structural Reform plan, the Cabinet Office committed to begin quarterly publication of the number of open websites starting in financial year 2011.
The definition used of a website is a user-centric one. Something is counted as a separate website if it is active and either has a separate domain name or, when it is a subdomain, the user cannot move freely between the subsite and parent site and there is no family likeness in the design. In other words, if the user experiences it as a separate site in their normal use of browsing, search and interaction, it is counted as one.
A website is considered closed when it ceases to be actively funded, run and managed by central government, either by packaging information and putting it in the right place for the intended audience on another website or digital channel, or by a third party taking it over, managing it and bearing the cost. Where appropriate, domains stay operational in order to redirect users to the UK Government Website Archive (http://www.nationalarchives.gov.uk/webarchive/).
The GOV.UK exemption process began with a web rationalisation of the government’s Internet estate to reduce the number of obsolete websites and to establish the scale of the websites that the government owns.
Not included in the number or list are websites of public corporations as listed on the Office for National Statistics website, partnerships more than half-funded by the private sector, charities and national museums. Specialist closed-audience functions, such as the BIS Research Councils, BIS Sector Skills Councils and Industrial Training Boards, and the Defra Levy Boards and their websites, are not included in this data. The Ministry of Defence conducted its own rationalisation of MOD and armed forces sites as an integral part of the Website Review; military sites belonging to a particular service are excluded from this dataset. Finally, those public bodies set up by Parliament and reporting directly to the Speaker’s Committee, and only reporting through a ministerial government department for the purposes of enacting legislation, are also excluded (for example, the Electoral Commission and IPSA).
Websites are listed under the department name for which the minister in HMG has responsibility, either directly through their departmental activities, or indirectly through being the minister reporting to Parliament for independent bodies set up by statute.
For re-usability, these are provided as Excel and CSV files.
Warning: as of June 2020, this dataset is no longer updated and has been replaced. Please see https://www.donneesquebec.ca/recherche/fr/dataset/evenements-de-securite-civile for data on civil security events since June 2020. This database brings together in a structured way information related to past claims that have been systematically grouped and centralized by the Ministry of Public Security (MSP). The consequences and evolution of the events are documented, and they have been categorized according to their level of impact on the safety of citizens, goods and services to the population, based on criteria defined in the Canadian Profile of the Common Alerting Protocol. It is updated continuously by the MSP Operations Department (DO). This database will allow analyses to be carried out at regional and local levels and can be used by municipalities in the implementation of their emergency measures plans. The event history archives come from event reports and status reports that were produced by the Government Operations Centre (COG) and by the regional directorates of the MSP. Among other things, it includes: 1- Observations entered directly into the Geoportal by civil security advisers from regional directorates; 2- A compilation of information recorded in COG event reports and DO status reports distributed to MSP partners since 1996; 3- A compilation of information contained in the files of the regional directorates. This may be information on paper, event reports or field visits, paper or digital maps, etc. The information in this database is in accordance with the Canadian Profile of the Common Alerting Protocol (PC-CAP). The PC-CAP is a set of rules and controlled values that support the translation and composition of a message to make it possible to send it by different means and from different sources. The severity level is an attribute defined in the PC-CAP.
It is used to characterize the severity level of the event based on the harm to the lives of people or damage to property. This severity level is defined by the following characteristics: Extreme: extraordinary threat to life or property; Important: significant threat to life or property; Moderate: possible threat to life or property; Minor: low or non-existent threat to life or property; Unknown: unknown severity, used among other things during tests and exercises. The emergency level is determined based on the reactive measures that need to be taken in response to the current situation. It is defined by the following characteristics: Immediate: a reactive action must be taken immediately; Planned: a reactive action must be taken soon (within the next hour); Future: a reactive action must be taken in the near future; Past: a reactive measure is no longer necessary; Unknown: unknown urgency, to be used during tests and exercises. The state relates to the context of the event, real or simulated. It is defined by the following characteristics: Current: information on a real event or situation; Exercise: fictional or real information produced as part of a civil security exercise; Test: technical tests only, to be ignored by all. Certainty is defined by the following characteristics: Observed: has occurred or is currently taking place; Probable: probability of the event happening > 50%; Possible: probability of the event happening < 50%; Unlikely: probability of the event happening around 0%; Unknown: unknown certainty. When an event date was not known, the date 1900-01-01 was recorded.
ATTRIBUTE DESCRIPTION: Date of observation: date of the event or observation; Type: name of the hazard; Name: name of the municipality; Municipality code: municipal code; State and certainty: as these are real events, the state is generally “current” and the certainty is generally “observed”; Urgency: the term “past” was generally used for events that occurred before the compilation work was carried out; Imprecision: any imprecision in a data element (the date of the event, its location, the source of the data, or no imprecision noted). This third-party metadata element was translated using an automated translation tool (Amazon Translate).
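As a rough illustration of how the PC-CAP controlled vocabularies above might be represented in code (the enum names and helper below are assumptions for the example, not the MSP schema):

```python
from enum import Enum

# Illustrative sketch of the PC-CAP severity and certainty vocabularies
# described above. Member names and the describe() helper are invented
# for illustration; the actual MSP/PC-CAP schema may differ.

class Severity(Enum):
    EXTREME = "extraordinary threat to life or property"
    IMPORTANT = "significant threat to life or property"
    MODERATE = "possible threat to life or property"
    MINOR = "low or non-existent threat to life or property"
    UNKNOWN = "unknown severity"

class Certainty(Enum):
    OBSERVED = "has occurred or is currently taking place"
    PROBABLE = "probability > 50%"
    POSSIBLE = "probability < 50%"
    UNLIKELY = "probability around 0%"
    UNKNOWN = "unknown certainty"

def describe(severity, certainty):
    """Compose a one-line summary from the controlled values."""
    return f"{severity.name.title()}: {severity.value} (certainty: {certainty.name.lower()})"

print(describe(Severity.MODERATE, Certainty.OBSERVED))
# Moderate: possible threat to life or property (certainty: observed)
```

Using enums rather than free-text strings mirrors the PC-CAP idea of controlled values: a message can only carry one of the agreed terms, which keeps records comparable across sources.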
London’s first Cultural Infrastructure Map brings together new research and information that has previously not existed in one place. It plots the location of cultural infrastructure and enables the user to view it alongside useful contextual data. This page contains cultural infrastructure data sets collected in the spring and summer of 2022 and published in 2023. Audits of facilities or infrastructure are a snapshot in time and based on the best available information. We welcome contributions or updates to the datasets from Londoners and others, which can be submitted through the Cultural Infrastructure Map. Since the previous data sets were published in 2019, the definition and typologies of premises that feed into the ‘Music venues all’ category have been changed to ensure that the category is mapped more consistently. These changes mean that the 2019 and 2023 datasets aren’t directly comparable. Data and analysis from the GLA GIS Team form a basis for the policy and investment decisions facing the Mayor of London and the GLA group. GLA Intelligence uses a wide range of information and data sourced from third-party suppliers within its analysis and reports. GLA Intelligence cannot be held responsible for the accuracy or timeliness of this information and data. The GLA will not be liable for any losses suffered or liabilities incurred by a party as a result of that party relying in any way on the information contained in this report. Contains OS data © Crown copyright and database rights 2019. Contains Audience Agency data. Contains CAMRA data. NOTE: The data is based on Ordnance Survey mapping and is published under Ordnance Survey’s ‘presumption to publish’. NOTE: For 2019 cultural infrastructure data, please visit: https://data.london.gov.uk/dataset/cultural-infrastructure-map
This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied.
The Collaborative Australian Protected Areas Database (CAPAD) 2010 provides both spatial and text information about government, Indigenous and privately protected areas for continental Australia. State and Territory conservation agencies supplied data current for various dates between June 2010 and January 2011. This is the eighth version of the database, with previous versions released in 1997, 1999, 2000, 2002, 2004, 2006 and 2008. CAPAD provides a snapshot of protected areas that meet the IUCN definition of a protected area:
"A protected area is an area of land and/or sea especially dedicated to the protection and maintenance of biological diversity, and of natural and associated cultural resources, and managed through legal or other effective means" (IUCN 1994).
The department publishes a summary of the CAPAD data biannually on its website at http://www.environment.gov.au/parks/nrs/science/capad/index.html.
This version of CAPAD 2010 is for USE BY NON-COMMERCIAL USERS ONLY. It contains all data supplied by Victoria and South Australian Forests data. It is available for non-commercial use by agreeing to the license conditions. If commercial use is sought an approach needs to be made to the individual data suppliers. CAPAD 2010 - restricted spatial data is available for password protected download from the Discover Information Geographically (DIG) website: http://www.environment.gov.au/metadataexplorer/explorer.jsp.
See metadata document CAPAD2010restrictedMetadata.htm stored with the data for a list of the Main attributes in the database.
The Collaborative Australian Protected Areas Database (CAPAD) 2010 provides both spatial and text information about government, Indigenous, private and jointly managed protected areas for continental Australia. State and Territory conservation agencies supplied data current for various dates between July 2010 and January 2011. This is the eighth version of the database, with previous versions released in 1997, 1999, 2000, 2002, 2004, 2006 and 2008. CAPAD provides a snapshot of protected areas that meet the IUCN definition of a protected area.
"Department of Sustainability, Environment, Water, Population and Communities" (2013) Collaborative Australian Protected Areas Database (CAPAD) 2010 - External Restricted. Bioregional Assessment Source Dataset. Viewed 11 December 2018, http://data.bioregionalassessments.gov.au/dataset/47312aee-722e-4c6e-bef8-9e439480503e.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
These themes present the compilation of protected areas in Quebec. They also include territories of importance for conservation. Protected areas include a variety of different legal or administrative designations. Territories registered in the Register of Protected Areas must meet the definition of a protected area under the Natural Heritage Conservation Act (LCPN; RLRQ, chapter C-61.01) or that of the International Union for Conservation of Nature (IUCN). The definition of the LCPN is: “A protected area is a territory, in a terrestrial or aquatic environment, geographically delimited, whose legal framework and administration are aimed specifically at ensuring the protection and maintenance of biological diversity and associated natural and cultural resources”. The IUCN defines it as: “A clearly defined, recognized, dedicated and managed geographic space, by any effective legal or other means, in order to ensure the long-term conservation of nature and associated ecosystem services and cultural values”. An area of importance for conservation is a geographically delimited territory for which the Ministry of the Environment and the Fight against Climate Change (MELCC) or an authority of the Government of Quebec has expressed its intention to prioritize its allocation for the purposes of protected areas. This third-party metadata element was translated using an automated translation tool (Amazon Translate).
In addition to our syndicated segments, we’re also able to run custom queries that return all the Mobile Ad IDs (MAIDs) seen in a specific location (an address; a latitude and longitude; or a WKT polygon) or in your defined geographic area of interest (political districts, DMAs, Zip Codes, etc.)
Beyond just returning all the MAIDs seen within a geofence, we are also able to offer additional customizable advantages: - Average precision between 5 and 15 meters - CRM list activation + extension - Extend beyond Mobile Location Data (MAIDs) with our device graph - Filter by frequency of visitation - Home and Work targeting (retrieve only employees or residents of an address) - Home extensions (devices that reside in the same dwelling as devices in your seed geofence) - Rooftop-level address geofencing precision (no radius used, ever, unless user-specified) - Social extensions (devices in the same social circle as users in your seed geofence) - Turn analytics into addressable audiences - Work extensions (coworkers of users in your seed geofence)
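The core geofence query described above (return the MAIDs seen inside a polygon) can be sketched with a simple point-in-polygon filter. This is an illustration of the general technique under invented field names, not DRAKO's implementation; a production system would use a geospatial database or library rather than pure Python.

```python
# Minimal sketch of geofence filtering: given device sightings as
# (maid, lon, lat) tuples, keep the MAIDs observed inside a polygon.
# Ray-casting point-in-polygon test; data and names are illustrative.

def point_in_polygon(lon, lat, polygon):
    """polygon: list of (lon, lat) vertices. Returns True if inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from the point cross this edge?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def maids_in_fence(sightings, fence):
    """Return the set of MAIDs with at least one sighting in the fence."""
    return {maid for maid, lon, lat in sightings
            if point_in_polygon(lon, lat, fence)}

# Toy fence (a unit square) with two sightings inside, one outside.
fence = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
sightings = [
    ("maid-a", 0.5, 0.5),
    ("maid-b", 2.0, 0.5),
    ("maid-a", 0.2, 0.9),
]
print(maids_in_fence(sightings, fence))  # {'maid-a'}
```

The same shape generalizes to the other queries mentioned: a district or DMA is just a larger polygon, and frequency filtering amounts to counting sightings per MAID before thresholding.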
Data Compliance: All of our Audience Targeting Data is fully CCPA compliant and 100% sourced from SDKs (Software Development Kits), the most reliable and consistent mobile data stream with end-user consent available, with only a 4-5 day delay. This means that our location and device ID data comes from partnerships with more than 1,500 mobile apps. This data comes with an associated location, which is how we are able to segment using geofences.
Data Quality: In addition to partnering with trusted SDKs, DRAKO has additional screening methods to ensure that our mobile location data is consistent and reliable. This includes data harmonization and quality scoring from all of our partners in order to disregard MAIDs with a low quality score.