30 datasets found
  1. 📣 Ad Click Prediction Dataset

    • kaggle.com
    Updated Sep 7, 2024
    Cite
    Ciobanu Marius (2024). 📣 Ad Click Prediction Dataset [Dataset]. https://www.kaggle.com/datasets/marius2303/ad-click-prediction-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 7, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ciobanu Marius
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    About

    This dataset provides insights into user behavior and online advertising, specifically focusing on predicting whether a user will click on an online advertisement. It contains user demographic information, browsing habits, and details related to the display of the advertisement. This dataset is ideal for building binary classification models to predict user interactions with online ads.

    Features

    • id: Unique identifier for each user.
    • full_name: User's name formatted as "UserX" for anonymity.
    • age: Age of the user (ranging from 18 to 64 years).
    • gender: The gender of the user (categorized as Male, Female, or Non-Binary).
    • device_type: The type of device used by the user when viewing the ad (Mobile, Desktop, Tablet).
    • ad_position: The position of the ad on the webpage (Top, Side, Bottom).
    • browsing_history: The user's browsing activity prior to seeing the ad (Shopping, News, Entertainment, Education, Social Media).
    • time_of_day: The time when the user viewed the ad (Morning, Afternoon, Evening, Night).
    • click: The target label indicating whether the user clicked on the ad (1 for a click, 0 for no click).

    Goal

    The objective of this dataset is to predict whether a user will click on an online ad based on their demographics, browsing behavior, the context of the ad's display, and the time of day. You will need to clean and explore the data, then apply machine learning models to make and evaluate predictions, which is a genuinely challenging task for this kind of data. The results can be used to improve ad targeting strategies, optimize ad placement, and better understand user interaction with online advertisements.
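
    A minimal baseline sketch for this task using pandas and scikit-learn: the column names come from the feature list above, while the filename ad_click_dataset.csv is a placeholder for wherever the Kaggle CSV is saved.

      import pandas as pd
      from sklearn.compose import ColumnTransformer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import OneHotEncoder, StandardScaler

      df = pd.read_csv("ad_click_dataset.csv")  # placeholder path

      categorical = ["gender", "device_type", "ad_position", "browsing_history", "time_of_day"]
      numeric = ["age"]

      # Drop rows with missing values for this simple baseline.
      df = df.dropna(subset=categorical + numeric + ["click"])
      X, y = df[categorical + numeric], df["click"]

      pre = ColumnTransformer([
          ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
          ("num", StandardScaler(), numeric),
      ])
      model = Pipeline([("pre", pre), ("clf", LogisticRegression(max_iter=1000))])

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.2, random_state=0, stratify=y
      )
      model.fit(X_train, y_train)
      print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))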

  2. Click Global Data | Web Traffic Data + Transaction Data | Consumer and B2B...

    • datarade.ai
    .csv
    Updated Mar 13, 2025
    Cite
    Consumer Edge (2025). Click Global Data | Web Traffic Data + Transaction Data | Consumer and B2B Shopper Insights | 59 Countries, 3-Day Lag, Daily Delivery [Dataset]. https://datarade.ai/data-products/click-global-data-web-traffic-data-transaction-data-con-consumer-edge
    Explore at:
    .csv (available download formats)
    Dataset updated
    Mar 13, 2025
    Dataset authored and provided by
    Consumer Edge
    Area covered
    Marshall Islands, Congo, Bermuda, South Africa, Sri Lanka, El Salvador, Nauru, Finland, Bosnia and Herzegovina, Montserrat
    Description

    Click Web Traffic Combined with Transaction Data: A New Dimension of Shopper Insights

    Consumer Edge is a leader in alternative consumer data for public and private investors and corporate clients. Click enhances the unparalleled accuracy of CE Transact by allowing investors to delve deeper and browse further into global online web traffic for CE Transact companies and more. Leverage the unique fusion of web traffic and transaction datasets to understand the addressable market and spending behavior on consumer and B2B websites. See the impact of changes in marketing spend, search engine algorithms, and social media awareness on visits to a merchant’s website, and discover the extent to which product mix and pricing drive or hinder visits and dwell time. Plus, Click uncovers a more global view of traffic trends in geographies not covered by Transact. Doubleclick into better forecasting, with Click.

    Consumer Edge’s Click is available in machine-readable file delivery and enables:

    • Comprehensive Global Coverage: Insights across 620+ brands and 59 countries, including key markets in the US, Europe, Asia, and Latin America.
    • Integrated Data Ecosystem: Click seamlessly maps web traffic data to CE entities and stock tickers, enabling a unified view across various business intelligence tools.
    • Near Real-Time Insights: Daily data delivery with a 5-day lag ensures timely, actionable insights for agile decision-making.
    • Enhanced Forecasting Capabilities: Combining web traffic indicators with transaction data helps identify patterns and predict revenue performance.

    Use Case: Analyze Year Over Year Growth Rate by Region

    Problem: A public investor wants to understand how a company’s year-over-year growth differs by region.

    Solution: The firm leveraged Consumer Edge Click data to:

    • Gain visibility into key metrics like views, bounce rate, visits, and addressable spend
    • Analyze year-over-year growth rates for a given time period
    • Break out data by geographic region to see growth trends

    Metrics include:

    • Spend
    • Items
    • Volume
    • Transactions
    • Price Per Volume
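
    As a rough illustration of the use case above, here is a minimal pandas sketch; the filename and column names (date, region, spend) are hypothetical, since Click's actual delivery schema is not shown here.

      import pandas as pd

      df = pd.read_csv("click_daily_delivery.csv", parse_dates=["date"])  # hypothetical export

      # Aggregate daily spend to annual totals per region, then compute
      # year-over-year growth within each region.
      df["year"] = df["date"].dt.year
      annual = df.groupby(["region", "year"], as_index=False)["spend"].sum()
      annual["yoy_growth"] = annual.groupby("region")["spend"].pct_change()
      print(annual)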

    Inquire about a Click subscription to perform more complex, near real-time analyses on public tickers and private brands, as well as for industries beyond CPG. For example:

    • Monitor web traffic as a leading indicator of stock performance and consumer demand
    • Analyze customer interest and sentiment at the brand and sub-brand levels

    Consumer Edge offers a variety of datasets covering the US, Europe (UK, Austria, France, Germany, Italy, Spain), and across the globe, with subscription options serving a wide range of business needs.

    Consumer Edge is the Leader in Data-Driven Insights Focused on the Global Consumer

  3. Repository Analytics and Metrics Portal (RAMP) 2020 data

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jul 23, 2021
    Cite
    Jonathan Wheeler; Kenning Arlitsch (2021). Repository Analytics and Metrics Portal (RAMP) 2020 data [Dataset]. http://doi.org/10.5061/dryad.dv41ns1z4
    Explore at:
    zip (available download formats)
    Dataset updated
    Jul 23, 2021
    Dataset provided by
    Montana State University
    University of New Mexico
    Authors
    Jonathan Wheeler; Kenning Arlitsch
    License

    CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)

    Description

    Version update: The originally uploaded versions of the CSV files in this dataset included an extra column, "Unnamed: 0," which is not RAMP data and was an artifact of the process used to export the data to CSV format. This column has been removed from the revised dataset. The data are otherwise the same as in the first version.

    The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. The data are a subset of data from RAMP (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2020. For a description of the data collection, processing, and output methods, please see the "methods" section below.

    Methods Data Collection

    RAMP data are downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).

    Data are downloaded in two sets per participating IR. The first set includes page level statistics about URLs pointing to IR pages and content files. The following fields are downloaded for each URL, with one row per URL:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    

    Following the data processing described below, on ingest into RAMP an additional field, citableContent, is added to the page level data.

    The second set includes similar information, but instead of being aggregated at the page level, the data are grouped based on the country from which the user submitted the corresponding search, and the type of device used. The following fields are downloaded for each combination of country and device, with one row per country/device combination:

    country: The country from which the corresponding search originated.
    device: The device used for the search.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    

    Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.

    More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en

    Data Processing

    Upon download from GSC, the page level data described above are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of page level statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the page level data which records whether each page/URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."
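
    A minimal sketch of this flagging step: classify a URL as citable content when its path ends in a non-HTML file extension. The extension list is illustrative only, not RAMP's actual rule set.

      from urllib.parse import urlparse

      CONTENT_EXTENSIONS = {".pdf", ".csv", ".zip", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

      def citable_content(url: str) -> str:
          path = urlparse(url).path.lower()
          return "Yes" if any(path.endswith(ext) for ext in CONTENT_EXTENSIONS) else "No"

      print(citable_content("https://repo.example.edu/bitstream/123/thesis.pdf"))  # Yes
      print(citable_content("https://repo.example.edu/handle/123"))                # No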

    The data aggregated by the search country of origin and device type do not include URLs. No additional processing is done on these data. Harvested data are passed directly into Elasticsearch.

    Processed data are then saved in a series of Elasticsearch indices. Currently, RAMP stores data in two indices per participating IR: one index includes the page level data, and the second includes the country of origin and device type data.

    About Citable Content Downloads

    Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.

    CCD information is summary data calculated on the fly within the RAMP web application. As noted above, data provided by GSC include whether and how many times a URL was clicked by users. Within RAMP, a "click" is counted as a potential download, so a CCD is calculated as the sum of clicks on pages/URLs that are determined to point to citable content (as defined above).

    For any specified date range, the steps to calculate CCD are:

    Filter data to only include rows where "citableContent" is set to "Yes."
    Sum the value of the "clicks" field on these rows.
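
    Applied to one of the published monthly CSV exports described under "Output to CSV" below, the two steps amount to a short pandas calculation (the field names follow the page-clicks layout documented below):

      import pandas as pd

      df = pd.read_csv("2020-01_RAMP_all_page-clicks.csv")
      ccd = df.loc[df["citableContent"] == "Yes", "clicks"].sum()
      print("Citable content downloads (CCD):", ccd)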
    

    Output to CSV

    Published RAMP data are exported from the production Elasticsearch instance and converted to CSV format. The CSV data consist of one "row" for each page or URL from a specific IR which appeared in search result pages (SERP) within Google properties as described above. Also as noted above, daily data are downloaded for each IR in two sets which cannot be combined. One dataset includes the URLs of items that appear in SERP. The second dataset is aggregated by combination of the country from which a search was conducted and the device used.

    As a result, two CSV datasets are provided for each month of published data:

    page-clicks:

    The data in these CSV files correspond to the page-level data, and include the following fields:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    citableContent: Whether or not the URL points to a content file (ending with pdf, csv, etc.) rather than HTML wrapper pages. Possible values are Yes or No.
    index: The Elasticsearch index corresponding to page click data for a single IR.
    repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the previous field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
    

    Filenames for files containing these data end with “page-clicks”. For example, the file named 2020-01_RAMP_all_page-clicks.csv contains page level click data for all RAMP participating IR for the month of January, 2020.

    country-device-info:

    The data in these CSV files correspond to the data aggregated by country from which a search was conducted and the device used. These include the following fields:

    country: The country from which the corresponding search originated.
    device: The device used for the search.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    index: The Elasticsearch index corresponding to country and device access information data for a single IR.
    repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the previous field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
    

    Filenames for files containing these data end with “country-device-info”. For example, the file named 2020-01_RAMP_all_country-device-info.csv contains country and device data for all participating IR for the month of January, 2020.

    References

    Google, Inc. (2021). Search Console APIs. Retrieved from https://developers.google.com/webmaster-tools/search-console-api-original.

  4. Data from: Repository Analytics and Metrics Portal (RAMP) 2021 data

    • data.niaid.nih.gov
    • zenodo.org
    • +1more
    zip
    Updated May 23, 2023
    Cite
    Jonathan Wheeler; Kenning Arlitsch (2023). Repository Analytics and Metrics Portal (RAMP) 2021 data [Dataset]. http://doi.org/10.5061/dryad.1rn8pk0tz
    Explore at:
    zip (available download formats)
    Dataset updated
    May 23, 2023
    Dataset provided by
    Montana State University
    University of New Mexico
    Authors
    Jonathan Wheeler; Kenning Arlitsch
    License

    CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)

    Description

    The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. The data are a subset of data from RAMP (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2021. For a description of the data collection, processing, and output methods, please see the "methods" section below.

    The record will be revised periodically to make new data available through the remainder of 2021.

    Methods

    Data Collection

    RAMP data are downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).

    Data are downloaded in two sets per participating IR. The first set includes page level statistics about URLs pointing to IR pages and content files. The following fields are downloaded for each URL, with one row per URL:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    

    Following the data processing described below, on ingest into RAMP an additional field, citableContent, is added to the page level data.

    The second set includes similar information, but instead of being aggregated at the page level, the data are grouped based on the country from which the user submitted the corresponding search, and the type of device used. The following fields are downloaded for each combination of country and device, with one row per country/device combination:

    country: The country from which the corresponding search originated.
    device: The device used for the search.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    

    Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.

    More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en

    Data Processing

    Upon download from GSC, the page level data described above are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of page level statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the page level data which records whether each page/URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."

    The data aggregated by the search country of origin and device type do not include URLs. No additional processing is done on these data. Harvested data are passed directly into Elasticsearch.

    Processed data are then saved in a series of Elasticsearch indices. Currently, RAMP stores data in two indices per participating IR: one index includes the page level data, and the second includes the country of origin and device type data.

    About Citable Content Downloads

    Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.

    CCD information is summary data calculated on the fly within the RAMP web application. As noted above, data provided by GSC include whether and how many times a URL was clicked by users. Within RAMP, a "click" is counted as a potential download, so a CCD is calculated as the sum of clicks on pages/URLs that are determined to point to citable content (as defined above).

    For any specified date range, the steps to calculate CCD are:

    Filter data to only include rows where "citableContent" is set to "Yes."
    Sum the value of the "clicks" field on these rows.
    

    Output to CSV

    Published RAMP data are exported from the production Elasticsearch instance and converted to CSV format. The CSV data consist of one "row" for each page or URL from a specific IR which appeared in search result pages (SERP) within Google properties as described above. Also as noted above, daily data are downloaded for each IR in two sets which cannot be combined. One dataset includes the URLs of items that appear in SERP. The second dataset is aggregated by combination of the country from which a search was conducted and the device used.

    As a result, two CSV datasets are provided for each month of published data:

    page-clicks:

    The data in these CSV files correspond to the page-level data, and include the following fields:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    citableContent: Whether or not the URL points to a content file (ending with pdf, csv, etc.) rather than HTML wrapper pages. Possible values are Yes or No.
    index: The Elasticsearch index corresponding to page click data for a single IR.
    repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the previous field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
    

    Filenames for files containing these data end with “page-clicks”. For example, the file named 2021-01_RAMP_all_page-clicks.csv contains page level click data for all RAMP participating IR for the month of January, 2021.

    country-device-info:

    The data in these CSV files correspond to the data aggregated by country from which a search was conducted and the device used. These include the following fields:

    country: The country from which the corresponding search originated.
    device: The device used for the search.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    index: The Elasticsearch index corresponding to country and device access information data for a single IR.
    repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the previous field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
    

    Filenames for files containing these data end with “country-device-info”. For example, the file named 2021-01_RAMP_all_country-device-info.csv contains country and device data for all participating IR for the month of January, 2021.

    References

    Google, Inc. (2021). Search Console APIs. Retrieved from https://developers.google.com/webmaster-tools/search-console-api-original.

  5. Agency Voter Registration Activity | gimi9.com

    • gimi9.com
    Updated Dec 4, 2024
    + more versions
    Cite
    (2024). Agency Voter Registration Activity | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_agency-voter-registration-activity/
    Explore at:
    Dataset updated
    Dec 4, 2024
    Description

    This dataset captures how many voter registration applications each agency has distributed, how many applications agency staff sent to the Board of Elections, how many staff each agency trained to distribute voter registration applications, whether or not the agency hosts a link to voting.nyc on its website and if so, how many clicks that link received during the reporting period.

  6. ‘Climate Change Dataset’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Dec 13, 2018
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2018). ‘Climate Change Dataset’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-climate-change-dataset-7e65/4a67af59/?iid=002-150&v=presentation
    Explore at:
    Dataset updated
    Dec 13, 2018
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Analysis of ‘Climate Change Dataset’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/yamqwe/climate-change-datae on 28 January 2022.

    --- Dataset description provided by original source is as follows ---

    About this dataset

    Data from World Development Indicators and Climate Change Knowledge Portal on climate systems, exposure to climate impacts, resilience, greenhouse gas emissions, and energy use.

    In addition to the data available here and through the Climate Data API, the Climate Change Knowledge Portal has a web interface to a collection of water indicators that may be used to assess the impact of climate change across over 8,000 water basins worldwide. You may use the web interface to download the data for any of these basins.

    Here is how to navigate to the water data:

    • Go to the Climate Change Knowledge Portal home page (http://climateknowledgeportal.worldbank.org/)
    • Click any region on the map
    • Click a country
    • In the navigation menu, click "Impacts" and then "Water"
    • Click the map to select a specific water basin
    • Click "Click here to get access to data and indicators"

    Please be sure to observe the disclaimers on the website regarding uncertainties and use of the water data.

    Attribution: Climate Change Data, World Bank Group.

    World Bank Data Catalog Terms of Use

    Source: http://data.worldbank.org/data-catalog/climate-change

    This dataset was created by World Bank and contains around 10000 samples. Its features are mostly per-year columns (such as 1993, 1994, and 2009) along with technical information and fields such as Series Code, Country Code, and Scale.

    How to use this dataset

    • Analyze 1995 in relation to Scale
    • Study the influence of 1998 on Country Code
    • More datasets

    Acknowledgements

    If you use this dataset in your research, please credit World Bank

    --- Original source retains full ownership of the source dataset ---

  7. OGD Portal: Daily usage by record (since January 2024) | gimi9.com

    • gimi9.com
    + more versions
    Cite
    OGD Portal: Daily usage by record (since January 2024) | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_12610-kanton-basel-landschaft
    Explore at:
    Description

    The data on the use of the datasets on the OGD portal BL (data.bl.ch) are collected and published by the specialist and coordination office OGD BL. Fields:

    date: The day on which the usage was measured.
    dataset_title: The title of the record.
    dataset_id: The technical ID of the dataset.
    visitors: Specifies the number of daily visitors to the record. Visitors are recorded by counting the unique IP addresses that recorded access on the day of the survey. The IP address represents the network address of the device from which the portal was accessed.
    interactions: Includes all interactions with any record on data.bl.ch. A visitor can trigger multiple interactions. Interactions include clicks on the website (searching datasets, filters, etc.) as well as API calls (downloading a dataset as a JSON file, etc.).

    Remarks:

    • Only calls to publicly available datasets are shown.
    • IP addresses and interactions of users with a login of the Canton of Basel-Landschaft - in particular of employees of the specialist and coordination office OGD - are removed from the dataset before publication and are therefore not shown.
    • Calls from actors that are clearly identifiable as bots by the user agent header are also not shown.
    • Combinations of dataset and date for which no use occurred (visitors == 0 & interactions == 0) are not shown.
    • Due to synchronization problems, data may be missing for individual days.
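
    A minimal pandas sketch of how this dataset could be summarized once exported locally; the filename is hypothetical, and the column names follow the field descriptions above.

      import pandas as pd

      df = pd.read_csv("ogd_daily_usage.csv", parse_dates=["date"])  # hypothetical export

      # Total visitors and interactions per dataset over the whole period,
      # most-used datasets first.
      totals = (
          df.groupby(["dataset_id", "dataset_title"], as_index=False)[["visitors", "interactions"]]
          .sum()
          .sort_values("interactions", ascending=False)
      )
      print(totals.head(10))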

  8. News Interactions on Globo.com Dataset

    • paperswithcode.com
    + more versions
    Cite
    Gabriel de Souza Pereira Moreira; Dietmar Jannach; Adilson Marques da Cunha, News Interactions on Globo.com Dataset [Dataset]. https://paperswithcode.com/dataset/news-interactions-on-globo-com
    Explore at:
    Authors
    Gabriel de Souza Pereira Moreira; Dietmar Jannach; Adilson Marques da Cunha
    Description

    Context

    This large dataset with user interaction logs (page views) from a news portal was kindly provided by Globo.com, the most popular news portal in Brazil, for reproducibility of the experiments with CHAMELEON - a meta-architecture for contextual hybrid session-based news recommender systems. The source code was made available on GitHub.

    The first version (v1) (download) of this dataset was released for reproducibility of the experiments presented in the following paper:

    Gabriel de Souza Pereira Moreira, Felipe Ferreira, and Adilson Marques da Cunha. 2018. News Session-Based Recommendations using Deep Neural Networks. In 3rd Workshop on Deep Learning for Recommender Systems (DLRS 2018), October 6, 2018, Vancouver, BC, Canada. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3270323.3270328

    A second version (v2) (download) of this dataset was made available for reproducibility of the experiments presented in the following paper. Compared to the v1, the only differences are:

    • Included four additional user contextual attributes (click_os, click_country, click_region, click_referrer_type)
    • Removed repeated clicks (clicks on the same articles) within sessions; sessions with fewer than two clicks (the minimum for the next-click prediction task) were removed

    Gabriel de Souza Pereira Moreira, Dietmar Jannach, and Adilson Marques da Cunha. 2019. Contextual Hybrid Session-based News Recommendation with Recurrent Neural Networks. arXiv preprint arXiv:1904.10367, 49 pages

    You are not allowed to use this dataset for commercial purposes, only with academic objectives (like education or research). If used for research, please cite the above papers.

    Content

    The dataset contains a sample of user interactions (page views) in the G1 news portal from Oct. 1 to 16, 2017, including about 3 million clicks, distributed over more than 1 million sessions from 314,000 users who read more than 46,000 different news articles during that period.

    It is composed of three files/folders:

    • clicks.zip - Folder with CSV files (one per hour) containing user session interactions in the news portal.
    • articles_metadata.csv - CSV file with metadata information about all (364047) published articles.
    • articles_embeddings.pickle - Pickle (Python 3) of a NumPy matrix containing the Article Content Embeddings (250-dimensional vectors) for the 364047 published articles, trained upon articles' text and metadata by CHAMELEON's ACR module (see paper for details). P.s. The full text of the news articles could not be provided due to license restrictions, but these embeddings can be used by neural networks to represent their content. See this paper for a t-SNE visualization of these embeddings, colored by category.
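
    A minimal sketch for loading the three artifacts listed above, assuming clicks.zip has been extracted to a clicks/ folder in the working directory:

      import glob
      import pickle

      import numpy as np
      import pandas as pd

      # Hourly click logs: one CSV per hour inside the extracted clicks/ folder.
      clicks = pd.concat(
          (pd.read_csv(path) for path in sorted(glob.glob("clicks/*.csv"))),
          ignore_index=True,
      )

      articles = pd.read_csv("articles_metadata.csv")

      # 364047 x 250 matrix of article content embeddings (Python 3 pickle).
      with open("articles_embeddings.pickle", "rb") as f:
          embeddings = np.asarray(pickle.load(f))

      print(clicks.shape, articles.shape, embeddings.shape)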

    Acknowledgements

    I would like to thank Globo.com for providing this dataset for this research and for the academic community, in particular Felipe Ferreira, who prepared the original dataset at Globo.com.

    Dataset banner photo by rawpixel on Unsplash

    Inspiration

    This dataset might be very useful if you want to implement and evaluate hybrid and contextual news recommender systems, using both user interactions and article content and metadata to provide recommendations. You might also use it for analytics, trying to understand how user interactions in a news portal are distributed by user, article, or category, for example.

    If you are interested in a dataset of user interactions on articles with the full text provided, to experiment with some different text representations using NLP, you might want to take a look in this smaller dataset.

  9. Introduction to static monitoring - PAMGuard tutorial dataset

    • zenodo.org
    bin, zip
    Updated Oct 2, 2024
    Cite
    Jamie Macaulay (2024). Introduction to static monitoring - PAMGuard tutorial dataset [Dataset]. http://doi.org/10.5281/zenodo.13880212
    Explore at:
    zip, bin (available download formats)
    Dataset updated
    Oct 2, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jamie Macaulay
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Mar 1, 2018 - May 31, 2018
    Description

    Introduction to static monitoring - PAMGuard tutorial dataset

    GENERAL INFORMATION

    This is a dataset to be used with the PAMGuard tutorial Introduction to static monitoring in PAMGuard. The dataset is from a single autonomous acoustic logger (SoundTrap) recording off the Scottish West Coast and contains a wealth of acoustic data including dolphin whistles and clicks, sonar, baleen whales and porpoises. The tutorial text and other related files can be found on the PAMGuardLearning GitHub page or PAMGuard website.

    Year of data collection:

    2018

    Geographic location of data collection:

    Stanton Banks, Scotland

    Funding sources that supported the collection of the data:

    The COMPASS project has been supported by the EU’s INTERREG VA Programme, managed by the Special EU Programmes Body. The views and opinions expressed in this document do not necessarily reflect those of the European Commission or the Special EU Programmes Body (SEUPB).

    Recommended citation for this dataset:

    Risch, D., Quer, S., Edwards, E., Beck, S., Macaulay, J., Calderan, S. (2018). Acoustic data from the Scottish west coast recorded with a single-element recording unit doi: 10.5281/zenodo.13880212

    DATA & FILE OVERVIEW

    Description of dataset

    The dataset contains three days of sample recordings from a deployment of an acoustic recording device (SoundTrap ST300) with a battery pack. The SoundTrap was running an on-device click detector on data at 576kHz sample rate and saving 96kHz raw acoustic data. The click detector detects any transient sound and saves a snippet of the waveform.

    The dataset also contains processed data from the entire deployment. These data have been processed in PAMGuard software (www.pamguard.org) for low frequency moans, dolphin whistles, clicks, noise, and long-term spectral averages. The detection and soundscape data have been saved and the raw acoustic recordings discarded.

    Files

    There are two directories - _audio_ and _viewer_. _audio_ contains .sud files, which are compressed audio and metadata files. They contain raw audio, click detection data, and metadata. Data from .sud files can be decompressed using SoundTrap Host software (www.oceaninstruments.co.nz).

    The _viewer_ folder contains a PAMGuard database and associated detection files in the _PAMBinary_ folder and sub-folders. The database contains PAMGuard settings and some basic metadata. The detection .pgdf files are non-human-readable files that contain detection data, such as detected clicks, frequency contours of whistles, moans and other tonal sounds, and a time series of soundscape metrics. These files can be opened with PAMGuard software (www.pamguard.org), MATLAB (https://github.com/PAMGuard/PAMGuardMatlab), and R (https://github.com/TaikiSan21/PamBinaries).

    The folder structure is as follows

    ├── audio #Three example days of acoustic recordings
    │ ├── D2_SB_189411
    │ ├── D2_SB_189415
    │ └── D2_SB_189428
    ├── README.md #The readme file
    └── viewer #Processed data from a whole deployment
    ├── compass_database_D2_stanton.sqlite3 #PAMGuard database
    └── PAMBinary
    ├── 20180301 #Folders containing detection files
    ├── 20180302
    ├── '''
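
    Since the database is a standard SQLite file, a quick way to see what it holds is Python's built-in sqlite3 module; the table layout is PAMGuard's own and is not documented here, so this sketch only lists the tables present.

      import sqlite3

      # Open the PAMGuard viewer database and list its tables.
      con = sqlite3.connect("viewer/compass_database_D2_stanton.sqlite3")
      rows = con.execute(
          "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
      ).fetchall()
      for (name,) in rows:
          print(name)
      con.close()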

    SHARING AND ACCESS

    These data are open source under Creative Commons Attribution 4.0 International. This means that these data can be used in other tutorials as long as the original authors are credited. Using these data for scientific purposes requires the permission of the authors.

  10. How Do I Contact "Bitdefender Customer Service"? A Simple Guide Dataset

    • paperswithcode.com
    Updated Jun 18, 2025
    Cite
    (2025). How Do I Contact "Bitdefender Customer Service"? A Simple Guide Dataset [Dataset]. https://paperswithcode.com/dataset/how-do-i-contact-bitdefender-customer-service
    Explore at:
    Dataset updated
    Jun 18, 2025
    Description


    In today's fast-paced digital world, one of the most critical things for people and organizations to do is keep their cyber security up to date. Bitdefender is a well-known firm that provides strong antivirus and internet security software. Like with any other service or product, users may have problems or questions about their subscriptions, features, billing, or installation. At this point, Bitdefender's customer service is a very significant aspect of the support system. This complete guide is called "How Do I Get in Touch with 'Bitdefender Customer Service'?" This easy-to-follow article will show you how to get in touch with Bitdefender's support team in a few different ways so you can obtain fast, useful, and skilled help.

    Understanding How Important Bitdefender Customer Service Is

    When it comes to cybersecurity services, customer service is highly important for keeping people satisfied. Both new and long-time customers may have problems that pop up out of the blue. These can be issues with installation, activation keys, system compatibility, payment, or security. Bitdefender has a number of help options that are tailored to these situations. If you know how to reach their customer care, you may get your problems fixed quickly and with as little hassle as possible.

    Here are some things you should know before you call Bitdefender Support. You can speed up the process by preparing a few things before you call Bitdefender's customer service. Be ready with the following information:


    The email address for your Bitdefender account

    Your Bitdefender Central login details

    The key or code that allows you to use your product

    The device or operating system that is having the difficulty

    A full explanation of the problem or error message you are getting

    Being prepared means the support team can help you right away without having to call you back several times.

    First, Go to Bitdefender Central

    When you ask, "How do I reach Bitdefender customer service?", the first stop is Bitdefender Central. This online dashboard lets you keep track of your account, installations, devices, and subscriptions. You can also use customer assistance options like live chat, ticket submission, and troubleshooting articles.

    You can get to Bitdefender Central by signing into your account on the Bitdefender website. Click on the "Support" area, which is normally near the bottom of the dashboard. Here you will find a number of useful articles, video tutorials, and ways to get in touch with the support team.

    Chat Support: Talk to a Bitdefender Employee Right Away

    One of the fastest and easiest ways to reach Bitdefender customer service is through live chat. You can access this tool from Bitdefender Central and talk to a live person in real time. The chat service can help you fix problems right away, whether they have to do with your account or with technology.

    To start a chat session, click the "Contact Support" or "Chat with an Expert" button. Once you get in touch, explain your situation in detail and follow the support person's instructions. This is the simplest way to deal with issues that need to be repaired fast but aren't too hard.

    Email Support: For Help That Is Thorough and Well-Documented

    Email support is another useful option if you need to send in documents or give detailed explanations. On Bitdefender's Central platform, you can create a support ticket. This option is appropriate for harder cases like disputed charges, license transfers, or recurring technical problems that need more sustained support.

    To submit a support ticket, go to the Bitdefender Central customer service page, fill out the form, explain your problem, and attach any relevant files. For straightforward problems, a representative will usually get back to you within a few hours to a day.

    Phone Support: Get in Touch with a Bitdefender Agent

    Sometimes, the best and most reassuring thing to do is to call customer service directly. In some regions, Bitdefender offers free phone support, which lets users clearly explain their concerns and get speedy solutions.

    You can find the relevant phone number for your country on the Bitdefender Contact page. Wait times may be longer or shorter depending on how busy it is, but the agents are ready to answer any question, from minor problems to more complicated security issues.

    Community Websites and Forums

    If you want to fix problems on your own or learn more before talking to a professional, the Bitdefender Community Forum is a good place to go. This platform lets users and official moderators discuss products and share advice, fixes, and information about the software.

    The Knowledge Base section is another wonderful way to get in-depth information, answers to common questions, and step-by-step guides. A lot of people get answers here without having to call customer service.

    Help with Bitdefender for Business Users

    You might need more specialized guidance if your firm uses Bitdefender GravityZone or other corporate solutions. Business users can access dedicated enterprise help through the GravityZone portal, where they can report issues, start conversations, and ask for help tailored to their security and infrastructure needs.

    Most business accounts come with account managers or technical support teams who can aid with deployment, integration, and ways to deal with threats in real time.

    How to Fix Common Problems Before Calling Support

    This guide also tells you when you might not need to get in touch with support at all. You can fix a number of common problems on your own with Bitdefender. For example:

    Installation problems: Downloading the full offline installer generally cures the problem.

    Activation errors happen when the license numbers are inaccurate or the subscription has run out.

    Problems with performance can usually be fixed by changing the scan schedule or updating the program.

    The "My Subscriptions" option in Bitdefender Central makes it easy to deal with billing problems.

    Using these tools can save you time and cut down on the number of times you have to call customer service.

    What Remote Help Does for Tech Issues

    Bitdefender can also help you with problems that are harder to fix from a distance. After you set up a time to talk to a support agent, you will need to install a remote access tool so that the technician can take control of your system and fix the problem directly. This is especially useful for people who aren't very comfortable with technology or for firms that have multiple levels of protection.

    Remote help makes sure that problems are handled in a competent way and gives you peace of mind that your digital security is still safe.

    How to Keep Bitdefender Safe and Up to Date

    Regular maintenance is one of the easiest ways to cut down on the need for customer service. Update your Bitdefender program regularly to get the latest security updates, malware definitions, and feature upgrades. To avoid compatibility issues, make sure that your operating system and any third-party software you use are also up to date.

    Regular scans, avoiding suspicious websites, and checking the Bitdefender dashboard for alerts will help keep your system safe and minimize the chances that you'll require support right away.

    What Bitdefender Mobile App Support Can Do

    You can also get support from the Bitdefender app on your Android or iOS device. The mobile interface lets you manage your devices, renew your membership, and even talk to customer care directly from your phone. This can be quite helpful for people who need support on the go or who are having trouble with their phone, such as setting up a VPN or parental controls.

    Keeping Consumer Data and Conversations Private

    Bitdefender places a high priority on its customers' privacy. Strict privacy and data protection rules apply to all kinds of contact, such as phone calls, emails, chats, and remote help. When you need to get in touch with customer service, always use legitimate channels, and don't give out personal information unless the help process requires it.

    Final Thoughts on How to Contact Bitdefender Customer Service

    Bitdefender's customer service is designed to help you quickly, clearly, and professionally with any issue, whether it's a technical problem, a question about billing, or just a need for guidance. Knowing whom to contact, having the right information ready, and choosing the best channel for help can make a great difference in how the whole experience goes.

  11. Multimodal WEDAR dataset for attention regulation behaviors, self-reported...

    • data.4tu.nl
    zip
    Updated May 9, 2023
    Cite
    Yoon Lee; Marcus Specht (2023). Multimodal WEDAR dataset for attention regulation behaviors, self-reported distractions, reaction time, and knowledge gain in e-reading [Dataset]. http://doi.org/10.4121/8f730aa3-ad04-4419-8a5b-325415d2294b.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    May 9, 2023
    Dataset provided by
    4TU.ResearchData
    Authors
    Yoon Lee; Marcus Specht
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Diverse learning theories have been constructed to understand learners' internal states through various tangible predictors. We focus on self-regulatory actions that are subconscious and habitual actions triggered by behavior agents' 'awareness' of their attention loss. We hypothesize that self-regulatory behaviors (i.e., attention regulation behaviors) also occur in e-reading as 'regulators' as found in other behavior models (Ekman, P., & Friesen, W. V., 1969). In this work, we try to define the types and frequencies of attention regulation behaviors in e-reading. We collected various cues that reflect learners' moment-to-moment and page-to-page cognitive states to understand the learners' attention in e-reading.

    The text 'How to make the most of your day at Disneyland Resort Paris' has been implemented on a screen-based e-reader, which we developed in a pdf-reader format. An informative, entertaining text was adopted to capture learners' attentional shifts during knowledge acquisition. The text has 2685 words, distributed over ten pages, with one subtopic on each page. A built-in webcam on Mac Pro and a mouse have been used for the data collection, aiming for real-world implementation only with essential computational devices. A height-adjustable laptop stand has been used to compensate for participants' eye levels.

    Thirty learners in higher education were invited for a screen-based e-reading task (M=16.2, SD=5.2 minutes). A pre-test questionnaire with ten multiple-choice questions was given before the reading to check their prior knowledge level about the topic. There was no specific time limit to finish the questionnaire. We collected cues that reflect learners' moment-to-moment and page-to-page cognitive states to understand the learners' attention in e-reading.

    Learners were asked to report their distractions on two levels during the reading: 1) in-text distraction (e.g., still reading the text with low attentiveness) or 2) out-of-text distraction (e.g., thinking of something else while not reading the text anymore). We implemented two noticeably-designed buttons on the right-hand side of the screen interface to minimize possible distraction from the reporting task.

    After triggering a new page, we applied blur stimuli to the text within a random range of 20 seconds, which ensures that the blur stimuli occur at least once on each page. Participants were asked to click the de-blur button on the text area of the screen to proceed with the reading. The button was implemented over the whole text area, so participants could minimize the effort to find and click it. Reaction time for de-blurring was also measured, to capture learners' arousal during the reading.

    We asked participants to answer pre-test and post-test questionnaires about the reading material. Participants were given ten multiple-choice questions before the session, while the same set of questions was given after the reading session (i.e., formative questions) with added subtopic summarization questions (i.e., summative questions). This provides insights into the quantitative and qualitative knowledge gained through the session and different learning outcomes based on individual differences.

    A video dataset of 931,440 frames has been annotated with the attention regulator behaviors using an annotation tool that plays the long sequence clip by clip, where each clip contains 30 frames. Two annotators (doctoral students) performed two stages of labeling. In the first stage, the annotators were trained on the labeling criteria and annotated the attention regulator behaviors separately based on their own judgments. The labels were summarized and cross-checked in the second round to address inconsistent cases, resulting in five attention regulation behaviors and one neutral state. See WEDAR_readme.csv for detailed descriptions of features.
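
    A quick arithmetic check on the annotation figures above: with 30 frames per clip, the 931,440 annotated frames correspond to 31,048 labeled clips.

      # Clips implied by the frame count reported above.
      TOTAL_FRAMES = 931_440
      FRAMES_PER_CLIP = 30
      print(TOTAL_FRAMES // FRAMES_PER_CLIP)  # 31048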

    The dataset has been uploaded in two forms: 1) raw data, in the form in which it was collected, and 2) preprocessed data, from which we extracted useful features for further learning analytics based on real-time and post-hoc data.

    Reference

    Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1(1), 49-98.

  12. How to Login Peacock TV Account? Dataset

    • paperswithcode.com
    Updated Jun 17, 2025
    Cite
    (2025). How to Login Peacock TV Account? Dataset [Dataset]. https://paperswithcode.com/dataset/how-to-login-peacock-tv-account
    Explore at:
    Dataset updated
    Jun 17, 2025
    Description


    As streaming platforms continue to dominate home entertainment, Peacock TV has emerged as one of the most talked-about services. Whether you're catching up on hit NBC shows, streaming live sports, or diving into exclusive originals, a Peacock TV login account is your gateway to a vast world of content. In this article, we'll walk you through what a Peacock TV login account is, how to create and manage one, and why it's worth your time.

    What Is a Peacock TV Login Account?

    A Peacock TV login account is your personal credential that allows access to Peacock’s streaming service. Developed by NBCUniversal, Peacock TV offers a mix of free and premium content, including movies, TV shows, news, and live sports. Creating an account is the first step to exploring what the platform has to offer. Your login credentials—typically an email and password—grant you access to features like watchlists, personalized recommendations, and subscription management.

    How to Create a Peacock TV Login Account

    Creating a Peacock TV login account is simple and takes only a few minutes. Here's a step-by-step guide:

    1. Visit the Official Peacock TV Website or App: Start by navigating to the Peacock TV app or website. The platform is available on most smart devices, including smartphones, tablets, smart TVs, and gaming consoles.

    2. Sign Up for a New Account: Click on the "Sign Up" or "Join Now" button. You’ll be asked to provide your email address, create a password, and enter basic personal information like your name and birthdate.

    Choose Your Plan Peacock TV offers several tiers, including a free plan with limited access and premium plans with more content and fewer ads. After selecting your plan, you can enter payment details if needed.

    Verify Your Email Address You’ll likely receive a verification email to confirm your new Peacock TV login account. Click the link provided to activate your account fully.

    Once your account is set up, you can log in from any supported device using your email and password.

    Logging In to Your Peacock TV Account Accessing your Peacock TV login account is straightforward:

    Open the Peacock TV app or website.

    Click on “Sign In.”

    Enter the email address and password associated with your account.

    Click “Continue,” and you’ll be taken to the homepage.

    From here, you can browse categories, stream content, and manage your account settings.

    What to Do if You Can’t Access Your Peacock TV Login Account If you’re having trouble accessing your Peacock TV login account, here are a few quick fixes:

    Forgot Password: Use the “Forgot Password?” link to receive a reset email.

    Wrong Email: Double-check that you’re using the correct email address.

    Account Locked: Too many failed login attempts can temporarily lock your account. Wait a few minutes before trying again or reset your password.

    Subscription Issues: If you’ve recently changed or canceled your plan, it could affect your ability to log in. Check your email for any billing or account alerts from Peacock.

    Benefits of Having a Peacock TV Login Account There are several advantages to having a Peacock TV login account:

    Customized Experience: Your account allows Peacock to recommend shows and movies based on your viewing history.

    Cross-Device Access: Log in on multiple devices and pick up right where you left off.

    Watchlist and Favorites: Easily save your favorite shows for later.

    Premium Features: If you subscribe to a paid plan, your login gives you access to exclusive content and live sports like Premier League soccer or WWE events.

    Managing Your Peacock TV Login Account Keeping your account information up-to-date is crucial for uninterrupted streaming. You can manage your Peacock TV login account through the “Account” section:

    Update Password: Regularly change your password for security.

    Manage Subscriptions: Upgrade, downgrade, or cancel your plan at any time.

    Payment Details: Update billing information when necessary.

    Parental Controls: Set content restrictions if kids are using the same login.

    Is the Peacock TV Login Account Secure? Security is a top priority for Peacock TV. Your Peacock TV login account is protected with industry-standard encryption, and you can take additional steps to secure your account:

    Use a strong, unique password.

    Enable two-factor authentication if available.

    Avoid sharing your login credentials with others.

    If you ever suspect unauthorized activity, contact Peacock customer support immediately and change your password.

    Conclusion In a crowded streaming landscape, Peacock TV stands out by offering a blend of free and premium content. Creating a Peacock TV login account is your first step toward enjoying top-tier entertainment, whether you're into classic sitcoms, new original series, or live sports. With user-friendly features and a variety of plans, Peacock makes it easy to find something for everyone in the household. So if you haven't already, sign up today and explore everything Peacock TV has to offer.

  13. Worldwide Soundscapes project meta-data

    • zenodo.org
    Updated Dec 9, 2022
    Cite
    Kevin F.A. Darras; Kevin F.A. Darras; Rodney Rountree; Rodney Rountree; Steven Van Wilgenburg; Steven Van Wilgenburg; Amandine Gasc; Amandine Gasc; 松海 李; 松海 李; 黎君 董; 黎君 董; Yuhang Song; Youfang Chen; Youfang Chen; Thomas Cherico Wanger; Thomas Cherico Wanger; Yuhang Song (2022). Worldwide Soundscapes project meta-data [Dataset]. http://doi.org/10.5281/zenodo.7415473
    Explore at:
    Dataset updated
    Dec 9, 2022
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Kevin F.A. Darras; Kevin F.A. Darras; Rodney Rountree; Rodney Rountree; Steven Van Wilgenburg; Steven Van Wilgenburg; Amandine Gasc; Amandine Gasc; 松海 李; 松海 李; 黎君 董; 黎君 董; Yuhang Song; Youfang Chen; Youfang Chen; Thomas Cherico Wanger; Thomas Cherico Wanger; Yuhang Song
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Worldwide Soundscapes project is a global, open inventory of spatio-temporally replicated soundscape datasets. This Zenodo entry comprises the data tables that constitute its (meta-)database, as well as their description.

    The overview of all sampling sites can be found on the corresponding project on ecoSound-web, as well as a demonstration collection containing selected recordings. More information on the project can be found here and on ResearchGate.

    The audio recording criteria justifying inclusion into the meta-database are:

    • Stationary (no transects, towed sensors or microphones mounted on cars)
    • Passive (unattended, no human disturbance by the recordist)
    • Ambient (no spatial or temporal focus on a particular species or direction)
    • Spatially and/or temporally replicated (multiple sites sampled at least at one common daytime or multiple days sampled at least in one common site)

    The individual columns of the provided data tables are described in the following. Data tables are linked through primary keys; joining them will result in a database.

    datasets

    • dataset_id: incremental integer, primary key
    • name: name of the dataset. If it is repeated, incremental integers should be used in the "subset" column to differentiate them.
    • subset: incremental integer that can be used to distinguish datasets with identical names
    • collaborators: full names of people deemed responsible for the dataset, separated by commas
    • contributors: full names of people who are not the main collaborators but who have significantly contributed to the dataset, and who could be contacted for in-depth analyses, separated by commas.
    • date_added: when the dataset was added (DD/MM/YYYY)
    • URL_open_recordings: if recordings (even only some) from this dataset are openly available, indicate the internet link where they can be found.
    • URL_project: internet link for further information about the corresponding project
    • DOI_publication: DOI of corresponding publications, separated by comma
    • core_realm_IUCN: The core realm of the dataset. Datasets may have multiple realms, but the main one should be listed. Datasets may contain sampling sites from different realms in the "sites" sheet. IUCN Global Ecosystem Typology (v2.0): https://global-ecosystems.org/
    • medium: the physical medium the microphone is situated in
    • protected_area: Whether the sampling sites were situated in protected areas or not, or only some.
    • GADM0: For datasets on land or in territorial waters, Global Administrative Database level0
      https://gadm.org/
    • GADM1: For datasets on land or in territorial waters, Global Administrative Database level1
      https://gadm.org/
    • GADM2: For datasets on land or in territorial waters, Global Administrative Database level2
      https://gadm.org/
    • IHO: For marine locations, the sea area that encompasses all the sampling locations according to the International Hydrographic Organisation. Map here: https://www.arcgis.com/home/item.html?id=44e04407fbaf4d93afcb63018fbca9e2
    • locality: optional free text about the locality
    • latitude_numeric_region: study region approximate centroid latitude in WGS84 decimal degrees
    • longitude_numeric_region: study region approximate centroid longitude in WGS84 decimal degrees
    • sites_number: number of sites sampled
    • year_start: starting year of the sampling
    • year_end: ending year of the sampling
    • deployment_schedule: description of the sampling schedule, provisional
    • temporal_recording_selection: list environmental exclusion criteria that were used to determine which recording days or times to discard
    • high_pass_filter_Hz: frequency of the high-pass filter of the recorder, in Hz
    • variable_sampling_frequency: Does the sampling frequency vary? If it does, write "NA" in the sampling_frequency_kHz column here and indicate the frequencies in the sampling_frequency_kHz column of the deployments sheet
    • sampling_frequency_kHz: frequency the microphone was sampled at (sounds of half that frequency will be recorded)
    • variable_recorder:
    • recorder: recorder model used
    • microphone: microphone used
    • freshwater_recordist_position: position of the recordist relative to the microphone during sampling (only for freshwater)
    • collaborator_comments: free-text field for comments by the collaborators
    • validated: This cell is checked if the contents of all sheets are complete and have been found to be coherent and consistent with our requirements.
    • validator_name: name of person doing the validation
    • validation_comments: validators: please insert the date when someone was contacted
    • cross-check: this cell is checked if the collaborators confirm the spatial and temporal data after checking the corresponding site maps, deployment and operation time graphs found at https://drive.google.com/drive/folders/1qfwXH_7dpFCqyls-c6b8RZ_fbcn9kXbp?usp=share_link

    datasets-sites

    • dataset_ID: primary key of datasets table
    • dataset_name: lookup field
    • site_ID: primary key of sites table
    • site_name: lookup field

    sites

    • site_ID: unique site IDs, larger than 1000 for compatibility with ecoSound-web
    • site_name: name or code of sampling site as used in respective projects
    • latitude_numeric: exact numeric degrees coordinates of latitude
    • longitude_numeric: exact numeric degrees coordinates of longitude
    • topography_m: for sites on land, elevation; for marine sites, depth (negative); in meters
    • freshwater_depth_m
    • realm: Ecosystem type according to IUCN GET https://global-ecosystems.org/
    • biome: Ecosystem type according to IUCN GET https://global-ecosystems.org/
    • functional_group: Ecosystem type according to IUCN GET https://global-ecosystems.org/
    • comments
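
    To illustrate the joins, a minimal sketch that assembles the three tables described above into one flat table. The CSV file names are hypothetical stand-ins for however the tables are exported; column names follow the descriptions above (note the differing case of dataset_id in datasets versus dataset_ID in datasets-sites).

```python
import pandas as pd

# Hypothetical file names; the actual Zenodo exports may be named differently.
datasets = pd.read_csv("datasets.csv")
datasets_sites = pd.read_csv("datasets-sites.csv")
sites = pd.read_csv("sites.csv")

# datasets-sites is a junction table: one row per (dataset, site) pair.
# Keep only its keys so the lookup fields don't collide on merge.
db = (
    datasets_sites[["dataset_ID", "site_ID"]]
    .merge(datasets, left_on="dataset_ID", right_on="dataset_id")
    .merge(sites, on="site_ID")
)

print(db[["name", "site_name", "latitude_numeric", "longitude_numeric"]].head())
```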

    deployments

    • dataset_ID: primary key of datasets table
    • dataset_name: lookup field
    • deployment: use identical subscript letters to denote rows that belong to the same deployment. For instance, you may use different operation times and schedules for different target taxa within one deployment.
    • start_date_min: earliest date of deployment start, double-click cell to get date-picker
    • start_date_max: latest date of deployment start, if applicable (only used when recorders were deployed over several days), double-click cell to get date-picker
    • start_time_mixed: deployment start local time, either in HH:MM format or a choice of solar daytimes (sunrise, sunset, noon, midnight). Corresponds to the recording start time for continuous recording deployments. If multiple start times were used, you should mention the latest start time (corresponds to the earliest daytime from which all recorders are active). If applicable, positive or negative offsets from solar times can be mentioned (For example: if data are collected one hour before sunrise, this will be "sunrise-60")
    • permanent: is the deployment permanent (in which case it would be ongoing and the end date or duration would be unknown)?
    • variable_duration_days: is the duration of the deployment variable? in days
    • duration_days: deployment duration per recorder (use the minimum if variable)
    • end_date_min: earliest date of deployment end, only needed if duration is variable, double-click cell to get date-picker
    • end_date_max: latest date of deployment end, only needed if duration is variable, double-click cell to get date-picker
    • end_time_mixed: deployment end local time, either in HH:MM format or a choice of solar daytimes (sunrise, sunset, noon, midnight). Corresponds to the recording end time for continuous recording deployments.
    • recording_time: does the recording last from the deployment start time to the end time (continuous) or at scheduled daily intervals (scheduled)? Note: we consider recordings with duty cycles to be continuous.
    • operation_start_time_mixed: scheduled recording start local time, either in HH:MM format or a choice of solar daytimes (sunrise, sunset, noon, midnight). If applicable, positive or negative offsets from solar times can be mentioned (For example: if data are collected one hour before sunrise, this will be "sunrise-60")
    • operation_duration_minutes: duration of operation in minutes, if constant
    • operation_end_time_mixed: scheduled recording end local time, either in HH:MM format or a choice of solar daytimes (sunrise, sunset, noon, midnight). If applicable, positive or negative offsets from solar times can be mentioned (For example: if data are collected one hour before sunrise, this will be "sunrise-60")
    • duty_cycle_minutes: duty cycle of the recording (i.e. the fraction of minutes when it is recording), written as "recording(minutes)/period(minutes)". For example: "1/6" if the recorder is active for 1 minute and standing by for 5 minutes.
    • sampling_frequency_kHz: only indicate the sampling frequency if it is variable within a particular dataset so that we need to code different frequencies for different deployments
    • recorder
    • subset_sites: If the deployment was not done in all the sites of the

  14. How to Login My Spectrum Account? | A Complete Guide Dataset

    • paperswithcode.com
    Updated Jun 17, 2025
    Cite
    (2025). How to Login My Spectrum Account? | A Complete Guide Dataset [Dataset]. https://paperswithcode.com/dataset/how-to-login-my-spectrum-account-a-complete
    Explore at:
    Dataset updated
    Jun 17, 2025
    Description

    In today's digital world, managing your services online has become not just a convenience but a necessity. Spectrum, one of the largest internet, TV, and phone service providers in the United States, offers a user-friendly platform to manage all your services through the Spectrum Login My Account portal. Whether you're checking your bill, updating account settings, or troubleshooting service issues, your Spectrum account makes it easy to take control of your services from anywhere, anytime. For account support, Spectrum's toll-free number is +1-341-900-3252.


    This article provides everything you need to know about accessing, managing, and troubleshooting your Spectrum Login My Account portal.

    What Is Spectrum Login My Account? Spectrum Login My Account is an online account management system that allows Spectrum customers to handle nearly every aspect of their subscription. Once you sign in, you can manage your internet, TV, and phone services all from one centralized dashboard.

    With a Spectrum login, customers can:

    View and pay bills

    Check service status and outages

    Manage Wi-Fi and equipment

    Upgrade or change services

    Access email and voicemail

    Contact support and request service changes


    Having access to the Spectrum Login My Account system saves time and provides greater flexibility in managing your services.

    How to Create Your Spectrum Login Account If you’re a new customer or haven’t created an account yet, setting up access is simple. Here’s how to get started with Spectrum Login My Account:

    Go to the Spectrum website and select “Create a Username.”

    Enter your account information, such as your account number, phone number, or email address associated with your Spectrum services.

    Verify your identity using a confirmation code sent to your phone or email.

    Create your username and password, and set up security questions for added protection.

    Log in to your account using your new credentials.

    Once registered, you can log in anytime to manage your account details, services, and payments.


    How to Access Spectrum Login My Account Accessing Spectrum Login My Account can be done through various devices:

    Desktop/Laptop: Go to the official Spectrum website and click on “Sign In.” Enter your username and password to access your dashboard.

    Mobile App: Download the My Spectrum App from the Apple App Store or Google Play Store. Sign in using your credentials and manage your account on the go.

    Tablet or Smart Device: Use the browser or the app for easy access to the account management tools.

    Key Features of Spectrum Login My Account 1. Billing and Payments With your Spectrum Login My Account, you can:

    View your current and past bills

    Set up automatic payments


    Make one-time payments

    Enroll in paperless billing

    Receive billing alerts

    Managing your finances is easier when you can do everything online in just a few clicks.

    Service Management You can modify or upgrade your current Spectrum services directly from the dashboard. Whether you want to add premium TV channels, upgrade your internet speed, or change your phone plan, it’s all accessible through your account.


    Equipment and Wi-Fi Management The account dashboard lets you:

    Check the status of your modem and router

    Restart your equipment remotely

    View connected devices

    Change your Wi-Fi network name and password


    Troubleshooting and Support If you're having technical issues, the Spectrum Login My Account portal provides real-time support tools, including:

    Service outage updates

    Self-service troubleshooting steps

    Online chat with support representatives

    Scheduling technician appointments

    Common Login Issues and How to Resolve Them While the Spectrum Login My Account portal is generally reliable, users may occasionally experience issues. Here are some common problems and quick solutions:

    Forgotten Username or Password Use the “Forgot Username or Password” link on the login page. You’ll be guided through the steps to recover or reset your credentials using your registered email or phone number.


    Locked Account Too many incorrect login attempts may lock your account. In this case, wait a few minutes and try again, or reset your password to regain access.


    Browser or App Problems If you can’t log in, try clearing your browser cache or updating the app. Switching to a different device or browser often solves the issue as well.

    Tips for Keeping Your Spectrum Account Secure Since your Spectrum Login Account portal includes sensitive information such as billing and service data, it’s important to keep it secure:

    Use a strong, unique password with a mix of letters, numbers, and symbols.

    Don’t share your login credentials with anyone.

    Enable two-factor authentication if prompted.

    Log out of your account after using public or shared devices.

    Regularly review account activity to ensure no unauthorized access.

    Why Spectrum Login My Account Is Useful Using the Spectrum Login My Account portal simplifies how you interact with your service provider. Instead of calling customer service or visiting a store, you can manage almost everything from your computer or mobile device.

    Whether you're moving to a new address, changing your plan, or paying your monthly bill, the online dashboard provides a quick and hassle-free way to handle your services.


    Final Thoughts Managing your internet, TV, and phone services has never been easier, thanks to Spectrum Login My Account. With 24/7 access, secure features, and tools for troubleshooting, the portal puts the power back in your hands. Whether you're on the go or at home, having access to your Spectrum account lets you stay in control, stay connected, and stay informed.

  15. University of Bristol Data Repository

    • catalog.civicdataecosystem.org
    Updated Apr 22, 2025
    Cite
    (2025). University of Bristol Data Repository [Dataset]. https://catalog.civicdataecosystem.org/dataset/university-of-bristol-data-repository
    Explore at:
    Dataset updated
    Apr 22, 2025
    Area covered
    Bristol
    Description

    The data.bris Research Data Repository is maintained by the University of Bristol Research Data Service. The Service is a Library-led collaboration with IT Services and RED. The data.bris Research Data Repository is an online digital repository of multi-disciplinary research datasets produced at the University of Bristol. Data published through the repository are all openly available under a Non-Commercial Government Licence for public sector information, and each deposit is assigned a unique Digital Object Identifier (DOI). If you have discovered material that you believe to be a potential breach of copyright, or a violation of any law (including but not limited to laws on copyright, patent, trademark, confidentiality, data protection, obscenity, defamation or libel), please refer to the Research Data Repository Notice and Take Down Policy.

    Some deposits consist of a great number of individual files. For this reason the Repository has adopted an organisational principle similar to the familiar folder/sub-folder/file approach. Listed at the bottom of each dataset page are either ‘Data Resources’ or ‘Sub-levels’. Data Resources are files associated with the dataset (you can download them from the dataset page or click ‘Explore’ then ’More information’ to see the file’s own deposit information). Where a depositor has used folders, you will be presented with a list of Sub-levels, which are synonymous with ‘folders’ and ‘sub-folders’. By clicking on a sub-level you can ‘drill down’ into the dataset and will either be presented with a list of Data Resources (associated files), further Sub-levels, or a mixture of both. All sub-levels are shown on the top level page. You can elect to go ‘Up one level’ or go ‘Back to top level’ (back to the initial deposit record) at any stage.

  16. Repository Analytics and Metrics Portal (RAMP) 2018 data

    • data.niaid.nih.gov
    • dataone.org
    • +2more
    zip
    Updated Jul 27, 2021
    Cite
    Jonathan Wheeler; Kenning Arlitsch (2021). Repository Analytics and Metrics Portal (RAMP) 2018 data [Dataset]. http://doi.org/10.5061/dryad.ffbg79cvp
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jul 27, 2021
    Dataset provided by
    Montana State University
    University of New Mexico
    Authors
    Jonathan Wheeler; Kenning Arlitsch
    License

    https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html

    Description

    The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. The data presented here are a subset of data from RAMP (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2018. For a description of the data collection, processing, and output methods, please see the "methods" section below. Note that the RAMP data model changed in August 2018, and two sets of documentation are provided to describe data collection and processing before and after the change.

    Methods

    RAMP Data Documentation – January 1, 2017 through August 18, 2018

    Data Collection

    RAMP data were downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).

    Data from January 1, 2017 through August 18, 2018 were downloaded in one dataset per participating IR. The following fields were downloaded for each URL, with one row per URL:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    country: The country from which the corresponding search originated.
    device: The device used for the search.
    date: The date of the search.
    

    Following the data processing described below, an additional field, citableContent, is added to the page-level data on ingest into RAMP.

    Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.

    More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en
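
    As a concrete illustration of this download step, a minimal sketch using the Search Console API's Python client. The site URL is a hypothetical IR property, creds is assumed to be an authorized OAuth2 credentials object, and this is not RAMP's actual harvesting code:

```python
from googleapiclient.discovery import build

# `creds` is assumed to be OAuth2 credentials authorized for the property;
# obtaining them is out of scope here.
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://repository.example.edu/",  # hypothetical IR property
    body={
        "startDate": "2018-01-01",
        "endDate": "2018-01-31",
        "dimensions": ["page", "country", "device", "date"],
        "rowLimit": 25000,
    },
).execute()

# Each returned row carries its dimension values under "keys", plus
# clicks, impressions, ctr (click-through rate), and position.
for row in response.get("rows", []):
    print(row["keys"], row["clicks"], row["impressions"], row["position"])
```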

    Data Processing

    Upon download from GSC, data are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the data which records whether each URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."
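
    In essence, the flagging step checks each URL's path against known non-HTML file extensions. A minimal sketch of that idea follows; the extension list is illustrative, not RAMP's actual rule set:

```python
from urllib.parse import urlparse

# Illustrative extension list; RAMP's production rules may differ.
CONTENT_EXTENSIONS = {".pdf", ".csv", ".doc", ".docx", ".xls", ".xlsx", ".zip"}

def citable_content(url: str) -> str:
    """Return "Yes" if the URL appears to point to a content file, else "No"."""
    path = urlparse(url).path.lower()
    return "Yes" if any(path.endswith(ext) for ext in CONTENT_EXTENSIONS) else "No"

print(citable_content("https://repo.example.edu/bitstream/123/thesis.pdf"))  # Yes
print(citable_content("https://repo.example.edu/handle/123"))                # No
```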

    Processed data are then saved in a series of Elasticsearch indices. From January 1, 2017, through August 18, 2018, RAMP stored data in one index per participating IR.

    About Citable Content Downloads

    Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.

    CCD information is summary data calculated on the fly within the RAMP web application. As noted above, data provided by GSC include whether and how many times a URL was clicked by users. Within RAMP, a "click" is counted as a potential download, so a CCD is calculated as the sum of clicks on pages/URLs that are determined to point to citable content (as defined above).

    For any specified date range, the steps to calculate CCD are:

    Filter data to only include rows where "citableContent" is set to "Yes."
    Sum the value of the "clicks" field on these rows.
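
    In pandas, those two steps reduce to a single filter-and-sum; a minimal sketch assuming the published CSV layout described below:

```python
import pandas as pd

df = pd.read_csv("2018-01_RAMP_all.csv")  # one monthly export, layout as documented below

# CCD = sum of clicks on rows flagged as citable content.
ccd = df.loc[df["citableContent"] == "Yes", "clicks"].sum()
print(f"Citable content downloads: {ccd}")
```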
    

    Output to CSV

    Published RAMP data are exported from the production Elasticsearch instance and converted to CSV format. The CSV data consist of one "row" for each page or URL from a specific IR which appeared in search result pages (SERP) within Google properties as described above.

    The data in these CSV files include the following fields:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    country: The country from which the corresponding search originated.
    device: The device used for the search.
    date: The date of the search.
    citableContent: Whether or not the URL points to a content file (ending with pdf, csv, etc.) rather than HTML wrapper pages. Possible values are Yes or No.
    index: The Elasticsearch index corresponding to page click data for a single IR.
    repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the index field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
    

    Filenames for files containing these data follow the format 2018-01_RAMP_all.csv. Using this example, the file 2018-01_RAMP_all.csv contains all data for all RAMP participating IR for the month of January, 2018.

    Data Collection from August 19, 2018 Onward

    RAMP data are downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).

    Data are downloaded in two sets per participating IR. The first set includes page level statistics about URLs pointing to IR pages and content files. The following fields are downloaded for each URL, with one row per URL:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    

    Following the data processing described below, an additional field, citableContent, is added to the page-level data on ingest into RAMP.

    The second set includes similar information, but instead of being aggregated at the page level, the data are grouped based on the country from which the user submitted the corresponding search, and the type of device used. The following fields are downloaded for combination of country and device, with one row per country/device combination:

    country: The country from which the corresponding search originated.
    device: The device used for the search.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    date: The date of the search.
    

    Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.

    More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en

    Data Processing

    Upon download from GSC, the page level data described above are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of page level statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the page level data which records whether each page/URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."

    The data aggregated by the search country of origin and device type do not include URLs. No additional processing is done on these data. Harvested data are passed directly into Elasticsearch.

    Processed data are then saved in a series of Elasticsearch indices. Currently, RAMP stores data in two indices per participating IR. One index includes the page level data, the second index includes the country of origin and device type data.

    About Citable Content Downloads

    Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository

  17. Clickstream Data for Online Shopping

    • kaggle.com
    Updated Apr 13, 2021
    Cite
    Bojan Tunguz (2021). Clickstream Data for Online Shopping [Dataset]. https://www.kaggle.com/datasets/tunguz/clickstream-data-for-online-shopping/discussion
    Explore at:
    CroissantCroissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 13, 2021
    Dataset provided by
    Kagglehttp://kaggle.com/
    Authors
    Bojan Tunguz
    License

    https://creativecommons.org/publicdomain/zero/1.0/https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Source:

    Mariusz Łapczyński, Cracow University of Economics, Poland, lapczynm '@' uek.krakow.pl
    Sylwester Białowąs, Poznan University of Economics and Business, Poland, sylwester.bialowas '@' ue.poznan.pl

    Data Set Information:

    The dataset contains clickstream information from an online store offering clothing for pregnant women. Data are from five months of 2008 and include, among others, product category, location of the photo on the page, country of origin of the IP address, and product price in US dollars.

    Attribute Information:

    The dataset contains 14 variables described in a separate file (See 'Data set description')

    Relevant Papers:

    N/A

    Citation Request:

    If you use this dataset, please cite:

    Łapczyński M., Białowąs S. (2013) Discovering Patterns of Users' Behaviour in an E-shop – Comparison of Consumer Buying Behaviours in Poland and Other European Countries, "Studia Ekonomiczne", nr 151, "La société de l'information : perspective européenne et globale : les usages et les risques d'Internet pour les citoyens et les consommateurs", p. 144-153.

    Data description "e-shop clothing 2008"

    Variables:

    1. YEAR: 2008
    2. MONTH: from April (4) to August (8)
    3. DAY: day number of the month
    4. ORDER: sequence of clicks during one session
    5. COUNTRY: country of origin of the IP address, with the following categories: 1-Australia, 2-Austria, 3-Belgium, 4-British Virgin Islands, 5-Cayman Islands, 6-Christmas Island, 7-Croatia, 8-Cyprus, 9-Czech Republic, 10-Denmark, 11-Estonia, 12-unidentified, 13-Faroe Islands, 14-Finland, 15-France, 16-Germany, 17-Greece, 18-Hungary, 19-Iceland, 20-India, 21-Ireland, 22-Italy, 23-Latvia, 24-Lithuania, 25-Luxembourg, 26-Mexico, 27-Netherlands, 28-Norway, 29-Poland, 30-Portugal, 31-Romania, 32-Russia, 33-San Marino, 34-Slovakia, 35-Slovenia, 36-Spain, 37-Sweden, 38-Switzerland, 39-Ukraine, 40-United Arab Emirates, 41-United Kingdom, 42-USA, 43-biz (*.biz), 44-com (*.com), 45-int (*.int), 46-net (*.net), 47-org (*.org)
    6. SESSION ID: session id (short record)
    7. PAGE 1 (MAIN CATEGORY): main product category: 1-trousers, 2-skirts, 3-blouses, 4-sale
    8. PAGE 2 (CLOTHING MODEL): code for each product (217 products)
    9. COLOUR: colour of product: 1-beige, 2-black, 3-blue, 4-brown, 5-burgundy, 6-gray, 7-green, 8-navy blue, 9-of many colors, 10-olive, 11-pink, 12-red, 13-violet, 14-white
    10. LOCATION: photo location on the page; the screen has been divided into six parts: 1-top left, 2-top in the middle, 3-top right, 4-bottom left, 5-bottom in the middle, 6-bottom right
    11. MODEL PHOTOGRAPHY: 1-en face, 2-profile
    12. PRICE: price in US dollars
    13. PRICE 2: whether the price of a particular product is higher than the average price for the entire product category: 1-yes, 2-no
    14. PAGE: page number within the e-store website (from 1 to 5)
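
    Since every field is an integer code, a first step in most analyses is mapping the codes back to labels. A minimal sketch for two of the variables; the file name, delimiter, and exact column spellings are assumptions to adjust against your copy of the data:

```python
import pandas as pd

# Assumed semicolon-delimited export with columns named after the variables above.
df = pd.read_csv("e-shop clothing 2008.csv", sep=";")

main_category = {1: "trousers", 2: "skirts", 3: "blouses", 4: "sale"}
price_above_average = {1: "yes", 2: "no"}

df["main_category"] = df["page 1 (main category)"].map(main_category)
df["price_above_average"] = df["price 2"].map(price_above_average)

print(df[["main_category", "price_above_average", "price"]].head())
```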

  18. Repository Analytics and Metrics Portal (RAMP) 2017 data

    • data.niaid.nih.gov
    • datadryad.org
    • +1more
    zip
    Updated Jul 27, 2021
    Cite
    Jonathan Wheeler; Kenning Arlitsch (2021). Repository Analytics and Metrics Portal (RAMP) 2017 data [Dataset]. http://doi.org/10.5061/dryad.r7sqv9scf
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jul 27, 2021
    Dataset provided by
    Montana State University
    University of New Mexico
    Authors
    Jonathan Wheeler; Kenning Arlitsch
    License

    https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html

    Description

    The Repository Analytics and Metrics Portal (RAMP) is a web service that aggregates use and performance data of institutional repositories. The data presented here are a subset of data from RAMP (http://rampanalytics.org), consisting of data from all participating repositories for the calendar year 2017. For a description of the data collection, processing, and output methods, please see the "methods" section below.

    Methods

    RAMP Data Documentation – January 1, 2017 through August 18, 2018

    Data Collection

    RAMP data are downloaded for participating IR from Google Search Console (GSC) via the Search Console API. The data consist of aggregated information about IR pages which appeared in search result pages (SERP) within Google properties (including web search and Google Scholar).

    Data from January 1, 2017 through August 18, 2018 were downloaded in one dataset per participating IR. The following fields were downloaded for each URL, with one row per URL:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    country: The country from which the corresponding search originated.
    device: The device used for the search.
    date: The date of the search.
    

    Following the data processing described below, an additional field, citableContent, is added to the page-level data on ingest into RAMP.

    Note that no personally identifiable information is downloaded by RAMP. Google does not make such information available.

    More information about click-through rates, impressions, and position is available from Google's Search Console API documentation: https://developers.google.com/webmaster-tools/search-console-api-original/v3/searchanalytics/query and https://support.google.com/webmasters/answer/7042828?hl=en

    Data Processing

    Upon download from GSC, data are processed to identify URLs that point to citable content. Citable content is defined within RAMP as any URL which points to any type of non-HTML content file (PDF, CSV, etc.). As part of the daily download of statistics from Google Search Console (GSC), URLs are analyzed to determine whether they point to HTML pages or actual content files. URLs that point to content files are flagged as "citable content." In addition to the fields downloaded from GSC described above, following this brief analysis one more field, citableContent, is added to the data which records whether each URL in the GSC data points to citable content. Possible values for the citableContent field are "Yes" and "No."

    Processed data are then saved in a series of Elasticsearch indices. From January 1, 2017, through August 18, 2018, RAMP stored data in one index per participating IR.

    About Citable Content Downloads

    Data visualizations and aggregations in RAMP dashboards present information about citable content downloads, or CCD. As a measure of use of institutional repository content, CCD represent click activity on IR content that may correspond to research use.

    CCD information is summary data calculated on the fly within the RAMP web application. As noted above, data provided by GSC include whether and how many times a URL was clicked by users. Within RAMP, a "click" is counted as a potential download, so a CCD is calculated as the sum of clicks on pages/URLs that are determined to point to citable content (as defined above).

    For any specified date range, the steps to calculate CCD are:

    Filter data to only include rows where "citableContent" is set to "Yes."
    Sum the value of the "clicks" field on these rows.
    

    Output to CSV

    Published RAMP data are exported from the production Elasticsearch instance and converted to CSV format. The CSV data consist of one "row" for each page or URL from a specific IR which appeared in search result pages (SERP) within Google properties as described above.

    The data in these CSV files include the following fields:

    url: This is returned as a 'page' by the GSC API, and is the URL of the page which was included in an SERP for a Google property.
    impressions: The number of times the URL appears within the SERP.
    clicks: The number of clicks on a URL which took users to a page outside of the SERP.
    clickThrough: Calculated as the number of clicks divided by the number of impressions.
    position: The position of the URL within the SERP.
    country: The country from which the corresponding search originated.
    device: The device used for the search.
    date: The date of the search.
    citableContent: Whether or not the URL points to a content file (ending with pdf, csv, etc.) rather than HTML wrapper pages. Possible values are Yes or No.
    index: The Elasticsearch index corresponding to page click data for a single IR.
    repository_id: This is a human readable alias for the index and identifies the participating repository corresponding to each row. As RAMP has undergone platform and version migrations over time, index names as defined for the index field have not remained consistent. That is, a single participating repository may have multiple corresponding Elasticsearch index names over time. The repository_id is a canonical identifier that has been added to the data to provide an identifier that can be used to reference a single participating repository across all datasets. Filtering and aggregation for individual repositories or groups of repositories should be done using this field.
    

    Filenames for files containing these data follow the format 2017-01_RAMP_all.csv. Using this example, the file 2017-01_RAMP_all.csv contains all data for all RAMP participating IR for the month of January, 2017.
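
    Given that naming scheme, assembling the full 2017 year is a simple concatenation; a minimal sketch (sorted glob for deterministic month order):

```python
import glob
import pandas as pd

# One CSV per month, named 2017-01_RAMP_all.csv through 2017-12_RAMP_all.csv.
files = sorted(glob.glob("2017-*_RAMP_all.csv"))
year = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

# Total clicks per participating repository across the year.
clicks_by_repo = year.groupby("repository_id")["clicks"].sum().sort_values(ascending=False)
print(clicks_by_repo.head())
```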

    References

    Google, Inc. (2021). Search Console APIs. Retrieved from https://developers.google.com/webmaster-tools/search-console-api-original.

  19. FOI-02163 - Datasets - Open Data Portal

    • opendata.nhsbsa.net
    Updated Sep 16, 2024
    + more versions
    Cite
    (2024). FOI-02163 - Datasets - Open Data Portal [Dataset]. https://opendata.nhsbsa.net/dataset/foi-02163
    Explore at:
    Dataset updated
    Sep 16, 2024
    Description

    a - it is not fair to disclose individuals' personal details to the world and is likely to cause damage or distress.

    b - these details are not of sufficient interest to the public to warrant an intrusion into the privacy of the individual.

    Please click the below web link to see the exemption in full: www.legislation.gov.uk/ukpga/2000/36/section/40

    Breach of confidentiality

    Please note that the identification of individuals is also a breach of the common law duty of confidence. An individual who has been identified could make a claim against the NHSBSA for the disclosure of the confidential information. The information requested is therefore being withheld as it falls under the exemption in section 41(1) ‘Information provided in confidence’ of the Freedom of Information Act. Please click the below web link to see the exemption in full.

  20. 🔐 Trend Micro Sign In Account – Secure Your Digital Life Dataset

    • paperswithcode.com
    Updated Jun 17, 2025
    + more versions
    Cite
    (2025). 🔐 Trend Micro Sign In Account – Secure Your Digital Life Dataset [Dataset]. https://paperswithcode.com/dataset/trend-micro-sign-in-account-secure-your
    Explore at:
    Dataset updated
    Jun 17, 2025
    Description


    In today’s digital age, keeping your devices and data safe is a top priority. Trend Micro offers advanced cybersecurity solutions, and accessing your Trend Micro sign in account is the first step toward complete protection. Whether you're managing your subscription, updating your settings, or checking your device security, logging into your Trend Micro account gives you full control.

    If you need help at any time, don’t hesitate to contact Trend Micro Support (Toll Free) Number: +1-341-900-3252 📞 for quick assistance.

    🔍 How to Sign in to Your Trend Micro Account Follow these steps to easily access your Trend Micro sign in account:

    Visit the official Trend Micro login page.

    Enter your registered email address and password.

    Click on “Sign In” to access your account dashboard.

    From your account, you can manage your subscriptions, renew licenses, and download protection software.

    💡 Tip: Forgot your password? Click “Forgot password?” on the sign-in page to reset it.

    🛡️ Benefits of a Trend Micro Sign In Account Creating and using your Trend Micro sign in account gives you access to several powerful features:

    📲 Manage device protection from one central place

    🔁 Renew or upgrade your plan with ease

    👨‍💻 Monitor and adjust your security settings

    📬 Get notifications and security updates

    📞 Access 24/7 customer support at +1-341-900-3252

    ☎️ Need Help? Call Support at +1-341-900-3252 (Toll Free) Having trouble signing in? Whether it's login errors, forgotten passwords, or account recovery, help is just a call away! Reach out to Trend Micro customer support at +1-341-900-3252 for expert guidance.

    🤔 Frequently Asked Questions (FAQs) ❓ How do I recover my Trend Micro sign in account? If you’ve forgotten your password or can’t log in:

    Go to the login page and click “Forgot password?”

    Follow the steps sent to your registered email

    Still need help? Call support at +1-341-900-3252

    ❓ Can I use my Trend Micro account on multiple devices? Yes! With your Trend Micro sign in account, you can manage multiple devices under one subscription—depending on your plan.

    ❓ What if I can't access the email I used to register? Contact Trend Micro Support at +1-341-900-3252 to recover your account or change your login email securely.

    ❓ Is the Trend Micro login page secure? Absolutely. Trend Micro uses encrypted connections and advanced security to keep your login credentials safe.

    ✅ Final Thoughts Your Trend Micro sign in account is your control center for digital security. From updating protection plans to receiving instant security alerts, it all starts with a secure login. If you ever face issues or need assistance, don't hesitate to call the official Trend Micro Toll Free Number: +1-341-900-3252 📞 — available 24/7 to help keep you protected. 🔐💻
