Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Google data search exercises can be used to practice finding data or statistics on a topic of interest, both with Google's own search tools and with advanced search operators.
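For example, a short sketch of how such an exercise might be scripted; the topic and operators chosen here are illustrative, not part of the exercises themselves:

```python
# A minimal sketch of composing a Google query with advanced operators and
# opening it in a browser; the topic and operators are illustrative.
import urllib.parse
import webbrowser

# site: restricts to a domain, filetype: to a format, quotes force exact match
query = 'site:data.gov filetype:csv "air quality"'
url = "https://www.google.com/search?q=" + urllib.parse.quote_plus(query)
webbrowser.open(url)  # or paste the query into Google Dataset Search
```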
OpenWeb Ninja's Google Images Data (Google SERP Data) API provides real-time image search capabilities for images sourced from all public sources on the web.
The API enables you to search more than 100 billion images from across the web, with advanced filtering as supported by Google Advanced Image Search. It returns Google Images data (Google SERP data) including the image URL, title, size information, thumbnail, and source information, among other data points. Supported filters include file type, image color, usage rights, creation time, and more. In addition, any advanced Google search operators can be used with the API.
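As a rough illustration of what a call to an API of this kind might look like; the endpoint URL, parameter names, auth header, and response fields below are placeholders, not OpenWeb Ninja's documented interface:

```python
# A hedged sketch of an image-search API call; everything named here
# (endpoint, params, header, response shape) is a hypothetical stand-in.
import requests

API_URL = "https://api.example-openwebninja.com/v1/image-search"  # hypothetical
params = {
    "query": "solar panel installation",
    "file_type": "jpg",          # hypothetical filter names
    "usage_rights": "reuse",
    "limit": 20,
}
headers = {"X-API-KEY": "YOUR_API_KEY"}  # hypothetical auth scheme

resp = requests.get(API_URL, params=params, headers=headers, timeout=30)
resp.raise_for_status()
for img in resp.json().get("results", []):
    print(img.get("title"), img.get("url"), img.get("source"))
```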
Common use cases for OpenWeb Ninja's Google Images Data & Google SERP Data API:
Creative Media Production: Enhance digital content with a vast array of real-time images, ensuring engaging and brand-aligned visuals for blogs, social media, and advertising.
AI Model Enhancement: Train and refine AI models with diverse, annotated images, improving object recognition and image classification accuracy.
Trend Analysis: Identify emerging market trends and consumer preferences through real-time visual data, enabling proactive business decisions.
Innovative Product Design: Inspire product innovation by exploring current design trends and competitor products, ensuring market-relevant offerings.
Advanced Search Optimization: Improve search engines and applications with enriched image datasets, providing users with accurate, relevant, and visually appealing search results.
OpenWeb Ninja's Annotated Imagery Data & Google SERP Data Stats & Capabilities:
100B+ Images: Access an extensive database of over 100 billion images.
Images Data from all Public Sources (Google SERP Data): Benefit from a comprehensive aggregation of image data from various public websites, ensuring a wide range of sources and perspectives.
Extensive Search and Filtering Capabilities: Utilize advanced search operators and filters to refine image searches by file type, color, usage rights, creation time, and more, making it easy to find exactly what you need.
Rich Data Points: Each image comes with more than 10 data points, including URL, title (annotation), size information, thumbnail, and source information, providing a detailed context for each image.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
If you're looking for a job in data analytics, you'll need a portfolio to demonstrate your expertise. Of course, if you're new to data analytics, you probably don't have much expertise yet! Not to worry: the fact that you might not have worked on a paid project doesn't mean you can't whip up a compelling portfolio using practice datasets.
Fortunately, the Internet is awash with these, most of them completely free to download (thanks to the open data initiative). In this post, we'll highlight a few first-rate repositories where you can find data on everything from business and finance to planetary science and crime.
Prefer to watch this information over reading it? Check out this video on dataset resources, presented by our very own in-house data scientist, Tom!
It seems we turn to Google for everything these days, and data is no exception. Launched in 2018, Google Dataset Search is like Google’s standard search engine, but strictly for data.
While it's not the best tool if you prefer to browse, it won't disappoint when you have a particular topic or keyword in mind. Google Dataset Search aggregates data from external sources, providing a clear summary of what's available, a description of the data, who provides it, and when it was last updated. It's an excellent place to start.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Azerbaijani Google Search Results URLs Dataset
Overview
The dataset includes multiple entries for each keyword, capturing different URLs and titles that were returned by Google. This allows researchers and developers to easily collect URLs for scraping content related to specific Azerbaijani keywords.
Structure
The dataset is structured as follows:
| Column Name | Description |
|---|---|
| keyword | The search term entered into Google. |
| title | The title of the webpage… |

See the full description on the dataset page: https://huggingface.co/datasets/LocalDoc/google_search_results_dataset_azerbaijan.
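A minimal sketch for loading the dataset with the Hugging Face datasets library; the split name and exact column set should be verified on the dataset page:

```python
# A minimal sketch, assuming the standard Hugging Face datasets API;
# check the dataset page for the actual split names and columns.
from datasets import load_dataset

ds = load_dataset("LocalDoc/google_search_results_dataset_azerbaijan", split="train")
print(ds.column_names)  # e.g. ["keyword", "title", ...]
for row in ds.select(range(5)):
    print(row["keyword"], "->", row["title"])
```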
CC0 1.0 Universal: https://creativecommons.org/publicdomain/zero/1.0/
Recipe keywords' positions in Google and YouTube search results.
These datasets can be interesting for SEO research in the recipe industry.
243 national recipes (based on Wikipedia's national dish list)
2 keyword versions: "dish recipe" and "how to make dish"
486 queries in total (10 results each)
Google: 4,860 rows (10 results per query by default; some missing)
YouTube: 1,455 rows (5 results per query by default; some missing)
Tools used: Google CSE API, YouTube Data API, Python, requests, pandas, advertools.
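A minimal sketch of how such SERP data can be collected with advertools, assuming a Google Custom Search Engine ID and API key; the queries shown are an illustrative subset of the 486:

```python
# A minimal sketch using advertools' serp_goog; column names follow
# advertools' flattening of the Custom Search Engine response.
import advertools as adv

queries = ["sushi recipe", "how to make sushi"]  # illustrative subset
serp_df = adv.serp_goog(q=queries, cx="YOUR_CSE_ID", key="YOUR_API_KEY")
print(serp_df[["searchTerms", "rank", "title", "link"]].head())
```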
It's interesting to see how recipes surface from a search engine's perspective, and to compare Google and YouTube as well.
National dishes are mostly delicious as well!
You can check the field descriptions in the documentation: current Keyword database: https://docs.dataforseo.com/v3/databases/google/keywords/?bash; Historical Keyword database: https://docs.dataforseo.com/v3/databases/google/history/keywords/?bash. You don't have to download fresh data dumps in JSON or CSV: we can deliver data straight to your storage or database. We send terabytes of data to dozens of customers every month using Amazon S3, Google Cloud Storage, Microsoft Azure Blob, Elasticsearch, and Google BigQuery. Let us know if you'd like to get your data to any other storage or database.
You can check the field descriptions in the documentation: current Full database: https://docs.dataforseo.com/v3/databases/google/full/?bash; Historical Full database: https://docs.dataforseo.com/v3/databases/google/history/full/?bash.
Full Google Database is a combination of the Advanced Google SERP Database and Google Keyword Database.
Google SERP Database offers millions of SERPs collected in 67 regions with most of Google’s advanced SERP features, including featured snippets, knowledge graphs, people also ask sections, top stories, and more.
Google Keyword Database encompasses billions of search terms enriched with related Google Ads data: search volume trends, CPC, competition, and more.
This database is available in JSON format only.
You don't have to download fresh data dumps in JSON: we can deliver data straight to your storage or database. We send terabytes of data to dozens of customers every month using Amazon S3, Google Cloud Storage, Microsoft Azure Blob, Elasticsearch, and Google BigQuery. Let us know if you'd like to get your data to any other storage or database.
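As an illustration, a sketch of consuming a delivered dump from Amazon S3; the bucket, key, newline-delimited JSON layout, and field names are assumptions for illustration:

```python
# A minimal sketch of reading a delivered JSON dump from S3; the bucket,
# key, JSONL layout, and record fields are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="your-bucket", Key="dataforseo/google_full_dump.jsonl")
for line in obj["Body"].iter_lines():
    record = json.loads(line)
    print(record.get("keyword"), record.get("search_volume"))  # hypothetical fields
```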
The DCAT extension for CKAN enhances data portals by enabling the exposure and consumption of metadata using the DCAT vocabulary, facilitating interoperability with other data catalogs. It provides tools for serializing CKAN datasets as RDF documents and harvesting RDF data from external sources, promoting data sharing and reuse. The extension supports various DCAT Application Profiles and includes features for adapting schemas, validating data, and integrating with search engines like Google Dataset Search.
Key Features:
DCAT Schemas: Offers pre-built CKAN schemas for common Application Profiles (DCAT AP v1, v2, and v3), which can be customized to align with site-specific requirements. These schemas include tailored form fields and validation rules to ensure DCAT compatibility.
DCAT Endpoints: Exposes catalog datasets in different RDF serializations, allowing external systems to easily consume CKAN metadata in a standardized format.
RDF Harvester: Enables the import of RDF serializations from other catalogs, automatically creating CKAN datasets based on the harvested metadata. This promotes data aggregation and discovery across different data sources.
DCAT-CKAN Mapping: Establishes a base mapping between DCAT and CKAN datasets, facilitating bidirectional transformation of metadata. The mapping is compatible with DCAT-AP v1.1, v2.1, and v3.
RDF Parser and Serializer: Includes an RDF parser for extracting CKAN dataset dictionaries from RDF serializations and an RDF serializer for transforming CKAN dataset metadata into different semantic formats. Both components are customizable through profiles.
Command Line Interface (CLI): Provides a command-line interface for managing and interacting with the extension's features, such as harvesting and data transformation tasks.
Google Dataset Search Integration: Offers support for indexing datasets in Google Dataset Search, improving the visibility of CKAN datasets to a wider audience.
Technical Integration: The ckanext-dcat extension extends CKAN's functionality by adding new plugins for RDF harvesting and serialization, allowing users to expose and consume DCAT metadata through the portal and enabling dataset enrichment from external sources. The integration can be customized through profiles that define custom data mappings.
Benefits & Impact: By implementing the DCAT extension, CKAN-based data portals can significantly improve their interoperability with other data catalogs and repositories that support DCAT. This facilitates data sharing, reuse, and discovery, and improves the visibility of datasets through indexing in services like Google Dataset Search. The extension's built-in schemas and validation rules ensure that CKAN metadata conforms to DCAT standards, while the RDF harvester simplifies importing data from external sources. Funded by organizations including the Government of Sweden, Vinnova, and FIWARE, the extension has been developed for production use cases and promotes a data-driven ecosystem.
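A minimal sketch of consuming such a DCAT endpoint with rdflib; the portal URL is a placeholder, and the exact endpoint paths should be checked against the ckanext-dcat documentation for the CKAN instance in question:

```python
# A minimal sketch of parsing a ckanext-dcat RDF serialization; the URL
# below is a placeholder, not a guaranteed live endpoint.
import rdflib

g = rdflib.Graph()
# ckanext-dcat typically serves per-dataset serializations such as
# <portal>/dataset/<name>.ttl and a catalog-level <portal>/catalog.ttl
g.parse("https://demo.ckan.org/dataset/sample-dataset.ttl", format="turtle")

DCAT = rdflib.Namespace("http://www.w3.org/ns/dcat#")
DCT = rdflib.Namespace("http://purl.org/dc/terms/")
for ds in g.subjects(rdflib.RDF.type, DCAT.Dataset):
    print(ds, g.value(ds, DCT.title))
```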
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This “dataset of metadata” contains paper-dataset pairs for datasets mentioned or referenced in papers comprising the CORD-19 dataset. CORD-19 is an open research dataset on COVID-19 produced by multiple institutions, and Google's Dataset Search team has enhanced it with additional metadata. Specifically, the metadata for these datasets was collected from their descriptions in schema.org mark-up across various data repositories on the Web.
Each row of the table is a paper-dataset pair, with cord_uid, paper title and url from the CORD-19 dataset and the metadata for a dataset.
Does the linked data provide additional insights into the content of the papers?
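One way to begin exploring that question, sketched with pandas; the file name is a placeholder, and only the cord_uid column is taken from the description above:

```python
# A minimal sketch for exploring the paper-dataset pairs; the CSV file
# name is hypothetical.
import pandas as pd

pairs = pd.read_csv("cord19_dataset_mentions.csv")  # hypothetical export
# How many distinct dataset mentions does each paper have?
per_paper = pairs.groupby("cord_uid").size().sort_values(ascending=False)
print(per_paper.head(10))
```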
Google's Dataset Search is a tool that makes it easier for researchers, students, and data geeks to discover datasets that they need for their work. It is built on the idea that metadata and data should be open whenever possible.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The goal of this research is to examine direct answers in the Google web search engine. The dataset was collected using Senuto (https://www.senuto.com/), an online tool that extracts website-visibility data from the Google search engine.
The dataset contains the following elements:
keyword,
number of monthly searches,
featured domain,
featured main domain,
featured position,
featured type,
featured url,
content,
content length.
The visibility dataset contains 743,798 keywords that returned SERPs with a direct answer.
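A minimal sketch for a first look at the dataset with pandas; the file name is a placeholder, and column names follow the element list above:

```python
# A minimal sketch; the CSV file name is hypothetical, and column names
# are assumed to match the element list above.
import pandas as pd

df = pd.read_csv("direct_answers.csv")  # hypothetical file name
# Which domains win the most direct-answer (featured) results?
print(df["featured main domain"].value_counts().head(10))
# Is there a typical length for direct-answer content?
print(df["content length"].describe())
```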
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data collected during the study "Towards High-Value Datasets determination for data-driven development: a systematic literature review" conducted by Anastasija Nikiforova (University of Tartu), Nina Rizun, Magdalena Ciesielska (Gdańsk University of Technology), Charalampos Alexopoulos (University of the Aegean), and Andrea Miletič (University of Zagreb). It is being made public both to act as supplementary data for the paper (a pre-print is available in Open Access at https://arxiv.org/abs/2305.10234) and so that other researchers can use these data in their own work.
The protocol is intended for a systematic literature review (SLR) on the topic of high-value datasets (HVD), with the aim of gathering information on how HVD and their determination have been reflected in the literature over the years and what these studies have found to date, including the indicators used, the stakeholders involved, data-related aspects, and frameworks. The data in this dataset were collected as the result of the SLR over Scopus, Web of Science, and the Digital Government Research library (DGRL) in 2023.
Methodology
To understand how HVD determination has been reflected in the literature over the years and what these studies have found to date, all relevant literature covering this topic was studied. To this end, the SLR was carried out by searching the digital libraries covered by Scopus, Web of Science (WoS), and the Digital Government Research library (DGRL).
These databases were queried for the keywords ("open data" OR "open government data") AND ("high-value data*" OR "high value data*"), which were applied to the article title, keywords, and abstract to limit the results to papers in which these objects were primary research objects rather than merely mentioned in the body, e.g., as future work. After deduplication, 11 unique articles were found and checked for relevance. As a result, a total of 9 articles were examined further. Each study was independently examined by at least two authors.
To attain the objective of our study, we developed the protocol, where the information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design- related information, (3) quality-related information, (4) HVD determination-related information.
Test procedure: each study was independently examined by at least two authors; after an in-depth examination of the article's full text, the structured protocol was filled in for each study. The structure of the protocol is available in the supplementary files (see Protocol_HVD_SLR.odt, Protocol_HVD_SLR.docx). The data collected for each study by two researchers were then synthesized into one final version by a third researcher.
Description of the data in this data set
Protocol_HVD_SLR provides the structure of the protocol. Spreadsheet #1 provides the filled protocol for the relevant studies. Spreadsheet #2 provides the list of results from the search over the three indexing databases, i.e., before irrelevant studies were filtered out.
The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design- related information, (3) quality-related information, (4) HVD determination-related information
Descriptive information
1) Article number - a study number, corresponding to the study number assigned in an Excel worksheet
2) Complete reference - the complete source information to refer to the study
3) Year of publication - the year in which the study was published
4) Journal article / conference paper / book chapter - the type of the paper -{journal article, conference paper, book chapter}
5) DOI / Website- a link to the website where the study can be found
6) Number of citations - the number of citations of the article in Google Scholar, Scopus, Web of Science
7) Availability in OA - availability of an article in the Open Access
8) Keywords - keywords of the paper as indicated by the authors
9) Relevance for this study - what is the relevance level of the article for this study? {high / medium / low}
Approach- and research design-related information
10) Objective / RQ - the research objective / aim and the established research questions
11) Research method (including unit of analysis) - the methods used to collect data, including the unit of analysis (country, organisation, specific unit that has been analysed, e.g., the number of use cases, scope of the SLR, etc.)
12) Contributions - the contributions of the study
13) Method - whether the study uses a qualitative, quantitative, or mixed-methods approach
14) Availability of the underlying research data - whether there is a reference to publicly available underlying research data, e.g., transcriptions of interviews or collected data, or an explanation of why these data are not shared
15) Period under investigation - the period (or moment) in which the study was conducted
16) Use of theory / theoretical concepts / approaches - does the study mention any theory / theoretical concepts / approaches? If any theory is mentioned, how is it used in the study?
Quality- and relevance-related information
17) Quality concerns - whether there are any quality concerns (e.g., limited information about the research methods used)
18) Primary research object - is the HVD a primary research object in the study? (primary - the paper is focused on HVD determination; secondary - HVD are mentioned but not studied (e.g., as part of the discussion, future work, etc.))
HVD determination-related information
19) HVD definition and type of value - how is the HVD defined in the article and / or any other equivalent term?
20) HVD indicators - what are the indicators to identify HVD? How were they identified? (components & relationships, “input -> output")
21) A framework for HVD determination - is there a framework presented for HVD identification? What components does it consist of, and what are the relationships between these components? (detailed description)
22) Stakeholders and their roles - what stakeholders or actors does HVD determination involve? What are their roles?
23) Data - what data do HVD cover?
24) Level (if relevant) - what is the level of the HVD determination covered in the article? (e.g., city, regional, national, international)
Format of the files: .xls, .csv (for the first spreadsheet only), .odt, .docx
Licenses or restrictions: CC BY
For more info, see README.txt
The Google Trends dataset provides critical signals that individual users and businesses alike can leverage to make better data-driven decisions. It simplifies manual interaction with the existing Google Trends UI by exposing anonymized, aggregated, and indexed search data in BigQuery. The dataset includes the Top 25 stories and Top 25 rising queries from Google Trends, made available as two separate BigQuery tables, with a set of new top terms appended daily. Each set of Top 25 and Top 25 rising terms expires after 30 days and is accompanied by a rolling five-year window of historical data for 210 distinct locations in the United States. This Google dataset is hosted in Google BigQuery as part of Google Cloud's Datasets solution and is included in BigQuery's free tier, which gives each user 1 TB of free query processing every month that can be used to run queries on this public dataset.
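A minimal sketch of querying the public tables with the BigQuery Python client; the table and column names follow the bigquery-public-data project but should be verified in the BigQuery console:

```python
# A minimal sketch of pulling the latest top terms; verify table and
# column names in the BigQuery console before relying on this.
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP credentials/project
sql = """
    SELECT term, rank, week
    FROM `bigquery-public-data.google_trends.top_terms`
    WHERE refresh_date = (SELECT MAX(refresh_date)
                          FROM `bigquery-public-data.google_trends.top_terms`)
    ORDER BY rank
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.rank, row.term, row.week)
```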
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Google My Business (GMB) is a platform designed to help you share detailed information about your business when it appears in search results. In addition to a URL and description, you can include photos, videos, contact numbers, operating hours, delivery zones, and links to booking services. Google My Business enables you to create eye-catching listings that enhance visibility when customers search online. It allows your in-store products to be displayed directly on your Google Business Profile. A cover photo, along with previews from Google Maps and Google Street View, gives potential customers a clear idea of what to expect when they visit. However, keep in mind that users can suggest changes to your profile, so it’s important to review it frequently to ensure accuracy.
Google My Business also highlights key factors to consider for verifying your business presence and enhancing your local search visibility through optimization.
Data Dictionary
| Column Name | Data Type | Description |
|---|---|---|
| location_id | Integer | Unique identifier for each location. |
| location_name | String | Name of the business or location. |
| address | String | Full address of the location. |
| phone_numbers | String/NaN | Contact phone number(s) for the business (if available). |
| latitude | Float | Geographic coordinate (latitude) of the location. |
| longitude | Float | Geographic coordinate (longitude) of the location. |
| price | String/NaN | Price range of services or products offered (e.g., "SGD 1–10"). |
| regular_hours | Dictionary | Business hours for each day of the week. |
| service_options | Dictionary | Available service options (e.g., dine-in, takeout, delivery). |
| average_rating | Float | Customer rating of the business (e.g., 4.5). |
| labels | String | Category or type of business (e.g., "Halal restaurant"). |
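A minimal sketch for loading the table above with pandas; the file name is a placeholder, and the dictionary-valued columns are assumed to be stored as Python-literal strings:

```python
# A minimal sketch; the CSV file name is hypothetical, and the dict
# columns are assumed to be serialized as Python literals.
import ast
import pandas as pd

df = pd.read_csv("google_my_business.csv")  # hypothetical file name
# Parse the dictionary-valued columns from their string form.
for col in ("regular_hours", "service_options"):
    df[col] = df[col].apply(lambda v: ast.literal_eval(v) if pd.notna(v) else {})
print(df[["location_name", "average_rating"]].head())
```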
This dataset, created by Agung Pambudi, is entirely original and has not been shared previously. It is distributed under the CC BY 4.0 license, which permits unrestricted use, provided the author is appropriately credited. A DOI is included to ensure accurate citation. Please be aware that duplicating this work on Kaggle is prohibited.
DataForSEO Labs API offers three powerful keyword research algorithms and historical keyword data:
• Related Keywords from the "searches related to" element of Google SERP.
• Keyword Suggestions that match the specified seed keyword with additional words before, after, or within the seed key phrase.
• Keyword Ideas that fall into the same category as the specified seed keywords.
• Historical Search Volume with current cost-per-click and competition values.
Based on in-market categories of Google Ads, you can get keyword ideas from the relevant Categories For Domain and discover relevant Keywords For Categories. You can also obtain Top Google Searches with AdWords and Bing Ads metrics, product categories, and Google SERP data.
You will find well-rounded ways to scout the competitors:
• Domain Whois Overview with ranking and traffic info from organic and paid search.
• Ranked Keywords that any domain or URL has positions for in SERP.
• SERP Competitors and the rankings they hold for the keywords you specify.
• Competitors Domain with a full overview of its rankings and traffic from organic and paid search.
• Domain Intersection keywords for which both specified domains rank within the same SERPs.
• Subdomains for the target domain you specify, along with the ranking distribution across organic and paid search.
• Relevant Pages of the specified domain with rankings and traffic data.
• Domain Rank Overview with ranking and traffic data from organic and paid search.
• Historical Rank Overview with historical data on rankings and traffic of the specified domain from organic and paid search.
• Page Intersection keywords for which the specified pages rank within the same SERP.
All DataForSEO Labs API endpoints function in Live mode: the results are returned in the response immediately after you send the necessary parameters in a POST request.
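A minimal sketch of such a Live-mode call; the endpoint path and payload fields follow DataForSEO's general v3 conventions but should be verified against the official documentation before use:

```python
# A hedged sketch of a Live-mode POST request; confirm the endpoint and
# task fields in DataForSEO's docs.
import requests

resp = requests.post(
    "https://api.dataforseo.com/v3/dataforseo_labs/google/related_keywords/live",
    auth=("login", "password"),          # DataForSEO uses HTTP Basic auth
    json=[{"keyword": "data analytics",  # one task object per request item
           "location_name": "United States",
           "language_name": "English"}],
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["tasks"][0]["result"])
```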
The limit is 2,000 API calls per minute; however, you can contact our support team if your project requires higher rates.
We offer well-rounded API documentation, GUI for API usage control, comprehensive client libraries for different programming languages, free sandbox API testing, ad hoc integration, and deployment support.
We have a pay-as-you-go pricing model. You simply add funds to your account and use them to get data. The account balance doesn't expire.
United States agricultural researchers have many options for making their data available online. This dataset aggregates the primary sources of ag-related data and determines where researchers are likely to deposit their agricultural data. These data serve both as a current landscape analysis and as a baseline for future studies of ag research data.
Purpose
As sources of agricultural data become more numerous and disparate, and collaboration and open data become more expected if not required, this research provides a landscape inventory of online sources of open agricultural data. An inventory of current agricultural data sharing options will help assess how the Ag Data Commons, a platform for USDA-funded data cataloging and publication, can best support data-intensive and multi-disciplinary research. It will also help agricultural librarians assist their researchers in data management and publication. The goals of this study were to:
establish where agricultural researchers in the United States (land grant and USDA researchers, primarily ARS, NRCS, USFS and other agencies) currently publish their data, including general research data repositories, domain-specific databases, and the top journals;
compare how much data is in institutional vs. domain-specific vs. federal platforms;
determine which repositories are recommended by top journals that require or recommend the publication of supporting data;
ascertain where researchers not affiliated with funding or initiatives possessing a designated open data repository can publish data.
Approach
The National Agricultural Library team focused on Agricultural Research Service (ARS), Natural Resources Conservation Service (NRCS), and United States Forest Service (USFS) style research data, rather than ag economics, statistics, and social sciences data. To find domain-specific, general, institutional, and federal agency repositories and databases that are open to US research submissions and contain some amount of ag data, resources including re3data, libguides, and ARS lists were analysed. Primarily environmental or public health databases were not included, but places where ag grantees would publish data were considered.
Search methods
We first compiled a list of known domain-specific USDA / ARS datasets and databases that are represented in the Ag Data Commons, including the ARS Image Gallery, ARS Nutrition Databases (sub-components), SoyBase, PeanutBase, the National Fungus Collection, i5K Workspace @ NAL, and GRIN. We then searched using search engines such as Bing and Google for non-USDA / federal ag databases, using Boolean variations of "agricultural data" / "ag data" / "scientific data" + NOT + USDA (to filter out the federal / USDA results). Most of these results were domain-specific, though some contained a mix of data subjects. We then used search engines such as Bing and Google to find top agricultural university repositories, using variations of "agriculture", "ag data" and "university" to find schools with agriculture programs. Using that list of universities, we searched each university web site to see whether the institution had a repository for its unique, independent research data, if not apparent in the initial web browser search. We found both ag-specific university repositories and general university repositories that housed a portion of agricultural data. Ag-specific university repositories are included in the list of domain-specific repositories.
Results included Columbia University's International Research Institute for Climate and Society, the UC Davis Cover Crops Database, etc. If a general university repository existed, we determined whether that repository could filter to include only data results after our chosen ag search terms were applied. General university databases that contain ag data included Colorado State University Digital Collections, University of Michigan ICPSR (Inter-university Consortium for Political and Social Research), and University of Minnesota DRUM (Digital Repository of the University of Minnesota). We then split out NCBI (National Center for Biotechnology Information) repositories. Next we searched the internet for open general data repositories using a variety of search engines, and repositories containing a mix of data, journals, books, and other types of records were tested to determine whether they could filter for data results after search terms were applied. General-subject data repositories include Figshare, Open Science Framework, PANGAEA, Protein Data Bank, and Zenodo. Finally, we compared scholarly journals' suggestions for data repositories against our list to fill in any missing repositories that might contain agricultural data. Extensive lists of journals in which USDA published in 2012 and 2016 were compiled by combining search results in ARIS, Scopus, and the Forest Service's TreeSearch, plus the USDA web sites of the Economic Research Service (ERS), National Agricultural Statistics Service (NASS), Natural Resources Conservation Service (NRCS), Food and Nutrition Service (FNS), Rural Development (RD), and Agricultural Marketing Service (AMS). The author instructions of the top 50 journals were consulted to see whether they (a) ask or require submitters to provide supplemental data, or (b) require submitters to submit data to open repositories. Data are provided for journals based on the 2012 and 2016 studies of where USDA employees publish their research, ranked by number of articles, including 2015/2016 Impact Factor, author guidelines, whether supplemental data are requested and reviewed, whether open data (supplemental or in a repository) are required, and recommended data repositories, as provided in the online author guidelines for each of the top 50 journals.
Evaluation
We ran a series of searches on all resulting general-subject databases with the designated search terms. From the results, we noted the total number of datasets in the repository, the type of resource searched (datasets, data, images, components, etc.), the percentage of the total database that each term comprised, any dataset with a search term that comprised at least 1% and 5% of the total collection, and any search term that returned greater than 100 and greater than 500 results. We compared domain-specific databases and repositories based on parent organization, type of institution, and whether data submissions were dependent on conditions such as funding or affiliation of some kind.
Results
A summary of the major findings from our data review:
Over half of the top 50 ag-related journals from our profile require or encourage open data for their published authors.
There are few general repositories that are both large and contain a significant portion of ag data in their collections. GBIF (Global Biodiversity Information Facility), ICPSR, and ORNL DAAC were among those that had over 500 datasets returned with at least one ag search term and for which that result comprised at least 5% of the total collection.
Not even one quarter of the domain-specific repositories and datasets reviewed allow open submission by any researcher regardless of funding or affiliation.
See the included README file for descriptions of each individual data file in this dataset.
Resources in this dataset:
Resource Title: Journals. File Name: Journals.csv
Resource Title: Journals - Recommended repositories. File Name: Repos_from_journals.csv
Resource Title: TDWG presentation. File Name: TDWG_Presentation.pptx
Resource Title: Domain Specific ag data sources. File Name: domain_specific_ag_databases.csv
Resource Title: Data Dictionary for Ag Data Repository Inventory. File Name: Ag_Data_Repo_DD.csv
Resource Title: General repositories containing ag data. File Name: general_repos_1.csv
Resource Title: README and file inventory. File Name: README_InventoryPublicDBandREepAgData.txt
This tutorial will teach you how to take time-series data from many field sites and create a shareable online map, where clicking on a field location brings you to a page with interactive graph(s).
The tutorial can be completed with a sample dataset (provided via a Google Drive link within the document) or with your own time-series data from multiple field sites.
Part 1 covers how to make interactive graphs in Google Data Studio and Part 2 covers how to link data pages to an interactive map with ArcGIS Online. The tutorial will take 1-2 hours to complete.
An example interactive map and data portal can be found at: https://temple.maps.arcgis.com/apps/View/index.html?appid=a259e4ec88c94ddfbf3528dc8a5d77e8
Terms of use: https://www.paradoxintelligence.com/terms
Real-time search volume and trend analysis across global markets with geographic and temporal granularity
Community Data License Agreement - Sharing, Version 1.0: https://cdla.io/sharing-1-0/
Tracks search interest over time, showing peaks and troughs in popularity for specific keywords.
Provides data on search trends by location, allowing for geographic comparisons of interest.
Lists associated search terms, highlighting related topics that are frequently searched alongside the primary keywords.
Distinguishes between the most popular queries and those with a sharp increase in search volume.
Organizes search data by category, enabling focused insights into specific industries, interests, or demographic groups.
Offers access to both historical and real-time data, ideal for identifying ongoing or emerging trends.
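A minimal sketch of pulling these signals with pytrends, a widely used unofficial Google Trends client; it is not an official API, and its interface may change or be rate-limited:

```python
# A minimal sketch using pytrends (unofficial client); results for niche
# terms may be empty, and related_queries entries can be None.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["data analytics"], timeframe="today 12-m", geo="US")

print(pytrends.interest_over_time().tail())   # interest over time
print(pytrends.interest_by_region().head())   # interest by location
related = pytrends.related_queries()["data analytics"]
print(related["top"], related["rising"])      # top vs. rising queries
```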
ESS-DIVE's (Environmental Systems Science Data Infrastructure for a Virtual Ecosystem) dataset metadata reporting format is intended to compile information about a dataset (e.g., title, description, funding sources) that can enable reuse of data submitted to the ESS-DIVE data repository. The files in this dataset include instructions (dataset_metadata_guide.md and README.md) that can be used to understand the types of metadata ESS-DIVE collects. The data dictionary (dd.csv) follows ESS-DIVE's file-level metadata reporting format and includes brief descriptions of each element of the dataset metadata reporting format. This dataset also includes a terminology crosswalk (dataset_metadata_crosswalk.csv) that shows how ESS-DIVE's metadata reporting format maps onto other existing metadata standards and reporting formats. Data contributors to ESS-DIVE can provide this metadata by manual entry using a web form or programmatically via ESS-DIVE's API (Application Programming Interface). A metadata template (dataset_metadata_template.docx or dataset_metadata_template.pdf) can be used to collaboratively compile metadata before providing it to ESS-DIVE. Since being incorporated into ESS-DIVE's data submission user interface, the dataset metadata reporting format has enabled features like automated metadata quality checks and dissemination of ESS-DIVE datasets onto other data platforms, including Google Dataset Search and DataCite.
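A minimal sketch of compiling dataset metadata as schema.org JSON-LD, the general shape such machine-readable submissions take; the field choices are illustrative, and the exact required fields and API endpoint should be taken from ESS-DIVE's documentation:

```python
# A hedged sketch of building schema.org Dataset metadata; the values are
# illustrative, and ESS-DIVE's required fields should be checked in its docs.
import json

metadata = {
    "@context": "http://schema.org",
    "@type": "Dataset",
    "name": "Example soil respiration time series",   # illustrative values
    "description": "Half-hourly soil CO2 flux from three field plots.",
    "creator": [{"@type": "Person", "name": "Jane Researcher"}],
    "funder": {"@type": "Organization", "name": "U.S. DOE Office of Science"},
}
print(json.dumps(metadata, indent=2))
# This payload could then be submitted via ESS-DIVE's API with an auth
# token; see the repository's API documentation for the exact call.
```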
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Survey response data to accompany an article in the Journal of Canadian Health Libraries Association (2025)