100+ datasets found
  1. Concerns over the protection of personal data by websites in Sweden 2018

    • statista.com
    Updated Jul 11, 2025
    Cite
    Statista (2025). Concerns over the protection of personal data by websites in Sweden 2018 [Dataset]. https://www.statista.com/statistics/498171/concerns-over-the-protection-of-personal-data-by-websites-in-sweden/
    Dataset updated
    Jul 11, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Oct 2019
    Area covered
    Sweden
    Description

    The majority of the Swedes who took part in a survey conducted in 2019 stated that they were concerned their online information was not kept secure by websites (** percent). ** percent of the respondents disagreed with that statement.

  2. Amount of data created, consumed, and stored 2010-2023, with forecasts to...

    • statista.com
    Updated Jun 30, 2025
    Cite
    Statista (2025). Amount of data created, consumed, and stored 2010-2023, with forecasts to 2028 [Dataset]. https://www.statista.com/statistics/871513/worldwide-data-created/
    Dataset updated
    Jun 30, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    May 2024
    Area covered
    Worldwide
    Description

    The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, reaching *** zettabytes in 2024. Over the next five years, up to 2028, global data creation is projected to grow to more than *** zettabytes. In 2020, the amount of data created and replicated reached a new high; growth was higher than previously expected, driven by increased demand during the COVID-19 pandemic, as more people worked and learned from home and used home entertainment options more often.

    Storage capacity also growing

    Only a small percentage of this newly created data is kept, though: just * percent of the data produced and consumed in 2020 was saved and retained into 2021. In line with the strong growth in data volume, the installed base of storage capacity is forecast to increase at a compound annual growth rate of **** percent over the forecast period from 2020 to 2025. In 2020, the installed base of storage capacity reached *** zettabytes.
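    The forecast above is expressed as a compound annual growth rate (CAGR). Since the actual figures are redacted in this listing, a minimal sketch with hypothetical zettabyte values shows how such a rate is computed:

    ```python
    # Minimal sketch: compound annual growth rate (CAGR) between two years.
    # Both zettabyte values below are hypothetical placeholders, not the
    # (redacted) figures from the Statista dataset.

    def cagr(start_value: float, end_value: float, years: int) -> float:
        """Compound annual growth rate as a fraction (0.25 == 25 %/year)."""
        return (end_value / start_value) ** (1 / years) - 1

    # e.g. installed storage capacity growing from a hypothetical 6.7 ZB
    # (2020) to a hypothetical 16 ZB (2025):
    rate = cagr(6.7, 16.0, 5)
    print(f"CAGR: {rate:.1%}")
    ```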

  3. LiDAR Surveys over Selected Forest Research Sites, Brazilian Amazon,...

    • catalog.data.gov
    • s.cnmilf.com
    • +3more
    Updated Jun 28, 2025
    Cite
    ORNL_DAAC (2025). LiDAR Surveys over Selected Forest Research Sites, Brazilian Amazon, 2008-2018 [Dataset]. https://catalog.data.gov/dataset/lidar-surveys-over-selected-forest-research-sites-brazilian-amazon-2008-2018-38601
    Dataset updated
    Jun 28, 2025
    Dataset provided by
    ORNL_DAAC
    Area covered
    Amazon Rainforest, Brazil
    Description

    This dataset provides the complete catalog of point cloud data collected during LiDAR surveys over selected forest research sites across the Amazon rainforest in Brazil between 2008 and 2018 for the Sustainable Landscapes Brazil Project. Flight lines were selected to overfly key field research sites in the Brazilian states of Acre, Amazonas, Bahia, Goias, Mato Grosso, Para, Rondonia, Santa Catarina, and Sao Paulo. The point clouds have been georeferenced, noise-filtered, and corrected for misalignment of overlapping flight lines. They are provided in 1 km2 tiles. The data were collected to measure forest canopy structure across Amazonian landscapes to monitor the effects of selective logging on forest biomass and carbon balance, and forest recovery over time.

  4. Chesapeake Bay Nitrogen Trend Predictor Dataset

    • catalog.data.gov
    • s.cnmilf.com
    Updated Jan 8, 2023
    Cite
    U.S. EPA Office of Research and Development (ORD) (2023). Chesapeake Bay Nitrogen Trend Predictor Dataset [Dataset]. https://catalog.data.gov/dataset/chesapeake-bay-nitrogen-trend-predictor-dataset
    Dataset updated
    Jan 8, 2023
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Area covered
    Chesapeake Bay
    Description

    Please review Zhang et al. (2022) for details on study design and datasets (https://doi.org/10.1016/j.watres.2022.118443). In summary, predictor and response variable data were acquired from the Chesapeake Bay Program and USGS. These data were subjected to a trend analysis to estimate the Mann-Kendall (MK) linear slope change for both predictor and response variables. After a cluster analysis of the scaled total nitrogen (TN) loading time series (the response variable), the cluster assignments were paired with the slope estimates from the suite of predictor variables tied to the nutrient inventory and to static geologic and land-use variables. A random forest (RF) analysis was then run to link trends in anthropogenic drivers and other contextual environmental factors to the identified trend cluster types. After calibrating the RF model, catchments across the Chesapeake Bay were classified by their likelihood of improving, remaining relatively static, or degrading over the 2007 to 2018 period. Tabular data are available on the journal website and PubMed, and the predictor/response variable data can be downloaded individually via the USGS and Chesapeake Bay Program links listed in the data access section. Portions of this dataset are inaccessible because the data were generated by other federal entities and are housed in their respective data warehouses (e.g., USGS and Chesapeake Bay Program). The combined dataset can be accessed on the journal website (https://www.sciencedirect.com/science/article/pii/S0043135422003979?via%3Dihub#ack0001) and on NCBI PubMed (https://pubmed.ncbi.nlm.nih.gov/35461100/). The predictor variable data can be accessed from the Chesapeake Bay Program (https://cast.chesapeakebay.net/) and USGS (https://pubs.er.usgs.gov/publication/ds948 and https://www.sciencebase.gov/catalog/item/5669a79ee4b08895842a1d47). This dataset is associated with the following publication: Zhang, Q., J. Bostic, and R. Sabo. Regional patterns and drivers of total nitrogen trends in the Chesapeake Bay watershed: Insights from machine learning approaches and management implications. Water Research, Elsevier, 218: 1-15 (2022).
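    As a rough illustration of the "MK linear slope" step described above, here is a minimal pure-Python sketch of Sen's slope (the median of all pairwise slopes, commonly paired with the Mann-Kendall test). The TN loading values are made up for illustration; this is not the authors' code.

    ```python
    # Sen's (Theil-Sen) slope: median of slopes over all pairs of points.
    from statistics import median

    def sens_slope(years, values):
        """Median of the slopes between every pair of observations."""
        slopes = [(values[j] - values[i]) / (years[j] - years[i])
                  for i in range(len(years))
                  for j in range(i + 1, len(years))]
        return median(slopes)

    years = list(range(2007, 2019))                 # the 2007-2018 period
    tn_load = [10.0, 9.8, 9.9, 9.5, 9.6, 9.2, 9.3,  # hypothetical TN
               9.0, 9.1, 8.8, 8.9, 8.6]             # loads, not real data
    print(f"Sen's slope: {sens_slope(years, tn_load):+.3f} per year")
    ```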

  5. 2018 SFSP & SSO Approved Sites

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Sep 25, 2023
    Cite
    data.austintexas.gov (2023). 2018 SFSP & SSO Approved Sites [Dataset]. https://catalog.data.gov/dataset/2018-approved-summer-meal-program-sites
    Dataset updated
    Sep 25, 2023
    Dataset provided by
    data.austintexas.gov
    Description

    Count of sites approved to operate Summer Food Service Program (SFSP) or Seamless Summer Option (SSO) for 2018. May include sites that have cancelled participation after initial approval. Site count is based on Site Start Date.

  6. Data from: Water level data for four sites in the coastal marsh at Grand Bay...

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Water level data for four sites in the coastal marsh at Grand Bay National Estuarine Research Reserve, Mississippi, from October 2018 through January 2020 [Dataset]. https://catalog.data.gov/dataset/water-level-data-for-four-sites-in-the-coastal-marsh-at-grand-bay-national-estuarine-resea
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    To better understand sediment deposition in marsh environments, scientists from the U.S. Geological Survey, St. Petersburg Coastal and Marine Science Center (USGS-SPCMSC) selected four study sites (Sites 5, 6, 7, and 8) along the Point Aux Chenes Bay shoreline of the Grand Bay National Estuarine Research Reserve (GNDNERR), Mississippi. These datasets were collected to serve as baseline data prior to the installation of a living shoreline (a subtidal sill). Each site consisted of five plots located along a transect perpendicular to the marsh-estuary shoreline at 5-meter (m) increments (5, 10, 15, 20, and 25 m from the shoreline). Each plot contained six net sedimentation tiles (NST) that were secured flush to the marsh surface using polyvinyl chloride (PVC) pipe. NST are an inexpensive and simple tool to assess short- and long-term deposition that can be deployed in highly dynamic environments without the compaction associated with traditional coring methods. The NST were deployed for three-month sampling periods, measuring sediment deposition from July 2018 to January 2020, with one set of NST being deployed for six months. Sediment deposited on the NST was processed to determine physical characteristics, such as deposition thickness, volume, wet weight/dry weight, grain size, and organic content (loss-on-ignition [LOI]). For select sampling periods, ancillary data (water level, elevation, and wave data) are also provided in this data release. Data were collected during USGS Field Activities Numbers (FAN) 2018-332-FA (18CCT01), 2018-358-FA (18CCT10), 2019-303-FA (19CCT01, 19CCT02, 19CCT03, and 19CCT04, respectively), and 2020-301-FA (20CCT01). Additional survey and data details are available from the U.S. Geological Survey Coastal and Marine Geoscience Data System (CMGDS) at https://cmgds.marine.usgs.gov/.
Data collected between 2016 and 2017 from a related NST study in the GNDNERR (Middle Bay and North Rigolets) can be found at https://doi.org/10.5066/P9BFR2US. Please read the full metadata for details on data collection, dataset variables, and data quality.

  7. Data from Decadal Change in Groundwater Quality Web Site, 1988-2018

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Data from Decadal Change in Groundwater Quality Web Site, 1988-2018 [Dataset]. https://catalog.data.gov/dataset/data-from-decadal-change-in-groundwater-quality-web-site-1988-2018
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    Evaluating Decadal Changes in Groundwater Quality: Groundwater-quality data were collected from 5,000 wells between 1988 and 2001 (first decadal sampling event) by the National Water-Quality Assessment Project. Samples are collected in groups of 20-30 wells with similar characteristics, called networks. About 1,500 of these wells in 67 networks were sampled again approximately 10 years later, between 2002 and 2012 (second sampling event), to evaluate decadal changes in groundwater quality. Between 2012 and 2018 (third sampling event), a subset of these networks was sampled again, allowing additional results to be displayed on the web page Decadal changes in groundwater quality (https://nawqatrends.wim.usgs.gov/decadal/). This is the fourth iteration of data added to the website. With the additional data, it is possible to evaluate changes in water quality between the 2nd and 3rd sampling events for 21 additional networks (56 total), between the 1st and 3rd sampling events for 18 additional networks (45 total), and across all 3 sampling events for 18 additional networks (45 total). A total of 83 networks have been sampled at least twice. Samples were obtained from monitoring wells, domestic-supply wells, and some public-supply wells before any treatment on the system. Groundwater samples used to evaluate decadal change were collected from networks of wells with similar characteristics. Some networks, consisting of domestic- or public-supply wells, were used to assess changes in the quality of groundwater used for drinking-water supply. Other networks, consisting of monitoring wells, assessed changes in the quality of shallow groundwater underlying key land-use types such as agricultural or urban lands. Networks were chosen based on geographic distribution across the Nation and to represent the most important water-supply aquifers and specific land-use types.
    Decadal changes in concentrations of nutrients, metals, and pesticides and other organic contaminants in groundwater were evaluated in a total of 83 networks across the Nation by comparing changes between selected sampling events. Decadal changes in median concentrations for a network are classified as large, small, or no change relative to a benchmark concentration. For example, a large change in chloride concentrations indicates that the median of all differences in concentrations in a network is greater than 5 percent of the chloride benchmark per decade. For chloride, which has a Secondary Maximum Contaminant Level of 250 milligrams per liter (mg/L), this means the change in concentration exceeded 12.5 mg/L, or 5 percent of the benchmark. A total of 230 networks were sampled from 1988 to 2001 to assess the status of the Nation's groundwater quality. Each dot on the map on the "Learn more" tab of the Decadal mapper website (https://nawqatrends.wim.usgs.gov/decadal/) represents the center point of a network of about 20 to 30 wells. Networks sampled in the first sampling event only are shown in green. There were 67 networks resampled from 2002 to 2012 to assess decadal changes in groundwater quality. Networks sampled from 2012 to 2018 and in at least one previous sampling event are shown in orange, and trend networks that have not yet been resampled in the third decadal sampling event are shown in blue. Networks sampled in the first and second sampling events but no longer being sampled are shown in gray.
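    The classification rule above (a "large" change when the median decadal change exceeds 5 percent of the benchmark, e.g. 12.5 mg/L for chloride's 250 mg/L SMCL) can be sketched as follows. The lower "small change" cutoff here is a hypothetical placeholder, since the source only states the large-change threshold:

    ```python
    # Minimal sketch of the network change classification described above.

    def classify_change(median_change_per_decade, benchmark, small_frac=0.01):
        """Classify as 'large', 'small', or 'no change' vs. the benchmark.

        small_frac is an assumed lower cutoff for illustration; the dataset
        documentation defines the actual class boundaries."""
        large_cutoff = 0.05 * benchmark       # e.g. 12.5 mg/L for chloride
        small_cutoff = small_frac * benchmark # hypothetical lower bound
        if abs(median_change_per_decade) > large_cutoff:
            return "large"
        if abs(median_change_per_decade) > small_cutoff:
            return "small"
        return "no change"

    print(classify_change(15.0, 250))  # 15 mg/L/decade vs. chloride SMCL
    ```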

  8. Data from: NDVI, Species Cover, and LAI, Burned and Unburned sites, Interior...

    • gimi9.com
    • s.cnmilf.com
    • +6more
    Updated Oct 28, 2021
    Cite
    (2021). NDVI, Species Cover, and LAI, Burned and Unburned sites, Interior Alaska, 2017-2018 [Dataset]. https://gimi9.com/dataset/data-gov_ndvi-species-cover-and-lai-burned-and-unburned-sites-interior-alaska-2017-2018-51151
    Dataset updated
    Oct 28, 2021
    Area covered
    Interior Alaska, Alaska
    Description

    This dataset provides leaf area index (LAI), tree species and canopy cover, normalized difference vegetation index (NDVI), and NDVI trends for boreal forests in interior Alaska, U.S. These data were collected to investigate how NDVI trends relate to forest structure and composition as influenced by disturbance and succession. The data are from 102 sites surveyed in 2017 and 2018, selected before visiting the field to include locations with and without a fire since 1940. A time series of NDVI was developed from Landsat (1999-2018) to measure NDVI trends. The field data cover the period 2017-08-29 to 2018-08-20, and the surveyed forest stands spanned a distance of over 425 km across interior Alaska. Recently burned sites were selected to span a range of years since fire, while sites without a recent fire were selected to include a range of Landsat NDVI trends. For each year, the median NDVI during the growing season was calculated; then a simple linear regression trend was fitted for the years 1999-2018.
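    A minimal sketch of the trend method described above (annual growing-season median NDVI followed by a simple linear regression over 1999-2018), using synthetic NDVI values rather than the dataset's:

    ```python
    # Per-year median NDVI, then an OLS slope over the full record.
    from statistics import median

    def linear_trend(xs, ys):
        """Ordinary least-squares slope of y on x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    # One site: growing-season NDVI observations per year (synthetic data
    # with a built-in +0.005/year greening trend)
    ndvi_by_year = {year: [0.60 + 0.005 * (year - 1999) + d
                           for d in (-0.02, 0.0, 0.03)]
                    for year in range(1999, 2019)}

    years = sorted(ndvi_by_year)
    annual_median = [median(ndvi_by_year[y]) for y in years]
    slope = linear_trend(years, annual_median)
    print(f"NDVI trend: {slope:+.4f} per year")
    ```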

  9. Alphabay marketplace: Anonymized dataset

    • impactcybertrust.org
    Updated Dec 31, 2014
    Cite
    Carnegie Mellon University (2014). Alphabay marketplace: Anonymized dataset [Dataset]. http://doi.org/10.23721/116/1462164
    Dataset updated
    Dec 31, 2014
    Authors
    Carnegie Mellon University
    Time period covered
    Dec 31, 2014 - May 26, 2017
    Description

    Anonymized database pertaining to the AlphaBay marketplace. This data was used in the papers "Plug and Prey? Measuring the Commoditization of Cybercrime via Online Anonymous Markets" (Van Wegberg et al., 2018) and "An Empirical Analysis of Traceability in the Monero Blockchain" (Moeser et al., 2018), and in the joint EMCDDA/EUROPOL report "Drugs and the darknet: Perspectives for enforcement, research and policy" (EMCDDA, 2017). In this dataset, we chose not to make available any textual information (item name, description, or feedback text). We also anonymized all handles (user id, item id). This represents more than two and a half years of parsed data from what was arguably the largest online anonymous marketplace ever.

    EMCDDA (2017). Drugs and the darknet: Perspectives for enforcement, research and policy. November 2017.
    Van Wegberg et al. (2018). Plug and Prey? Measuring the Commoditization of Cybercrime via Online Anonymous Markets. In Proceedings of the 27th USENIX Security Symposium (USENIX Security '18), Baltimore, MD, August 2018.
    Moeser et al. (2018). An Empirical Analysis of Traceability in the Monero Blockchain. In Proceedings on Privacy Enhancing Technologies (PETS 2018), volume 3, Barcelona, Spain, July 2018.

  10. Context Ad Clicks Dataset

    • kaggle.com
    Updated Feb 9, 2021
    Cite
    Möbius (2021). Context Ad Clicks Dataset [Dataset]. https://www.kaggle.com/arashnic/ctrtest/code
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Feb 9, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Möbius
    License

    CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    Context

    The dataset was generated by an e-commerce website that sells a variety of products on its online platform. The site records the behaviour of its customers and stores it as a log. Most of the time, however, users do not buy a product instantly; there is a time gap during which the customer might surf the internet and perhaps visit competitor websites. To improve product sales, the website owner has hired an ad-tech company, which built a system that shows ads for the owner's products on partner websites. If a user comes to the owner's website, searches for a product, and then visits these partner websites or apps, the previously viewed items or similar items are shown to them as ads. If the user clicks such an ad, they are redirected to the owner's website and might buy the product.

    The task is to predict the probability of a user clicking an ad shown to them on the partner websites over the next 7 days, on the basis of historical view-log data, ad-impression data, and user data.

    Content

    You are provided with the view log of users (2018/10/15 - 2018/12/11) and the product descriptions collected from the owner website. We also provide training and test data containing details of the ad impressions at the partner websites (Train + Test). The train data contains the impression logs during 2018/11/15 – 2018/12/13, along with a label specifying whether the ad was clicked. Your model will be evaluated on the test data, which contains impression logs during 2018/12/12 – 2018/12/18 without labels. You are provided with the following files:

    • train.zip: contains the following 3 files:
      • train.csv
      • view_log.csv
      • item_data.csv
    • test.csv: contains the impressions for which participants need to predict the click rate
    • sample_submission.csv: the format in which you have to submit your predictions

    Inspiration

    • Predict the probability of a user clicking the ad shown to them on the partner websites for the next 7 days, on the basis of historical view log data, ad impression data, and user data.

    The evaluation metric could be the area under the ROC curve between the predicted probability and the observed target.
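    The suggested metric, area under the ROC curve, can be computed directly from predicted probabilities and labels via the Mann-Whitney rank formulation; a minimal sketch with made-up predictions:

    ```python
    # AUC = P(score of a random positive > score of a random negative),
    # counting ties as 1/2. O(n_pos * n_neg), fine for a small example.

    def roc_auc(labels, scores):
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Made-up click labels and predicted click probabilities:
    y_true = [0, 0, 1, 1, 0, 1]
    y_prob = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
    print(f"AUC: {roc_auc(y_true, y_prob):.3f}")
    ```

    In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`), which computes the same quantity from sorted scores.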

  11. A web tracking data set of online browsing behavior of 2,148 users

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip, txt +1
    Updated May 14, 2021
    Cite
    Juhi Kulshrestha; Marcos Oliveira; Orkut Karacalik; Denis Bonnay; Claudia Wagner (2021). A web tracking data set of online browsing behavior of 2,148 users [Dataset]. http://doi.org/10.5281/zenodo.4757574
    Explore at:
    Available download formats: zip, txt, application/gzip
    Dataset updated
    May 14, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Juhi Kulshrestha; Marcos Oliveira; Orkut Karacalik; Denis Bonnay; Claudia Wagner
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This anonymized data set consists of one month (October 2018) of web tracking data for 2,148 German users. For each user, the data contain the anonymized URL of each webpage the user visited, the domain of the webpage, and the category of the domain, drawn from 41 distinct categories. In total, these 2,148 users made 9,151,243 URL visits, spanning 49,918 unique domains. For each user in the data set, we have self-reported information (collected via a survey) about their gender and age.

    We acknowledge the support of Respondi AG, which provided the web tracking and survey data free of charge for research purposes, with special thanks to François Erner and Luc Kalaora at Respondi for their insights and help with data extraction.

    The data set is analyzed in the following paper:

    • Kulshrestha, J., Oliveira, M., Karacalik, O., Bonnay, D., Wagner, C. "Web Routineness and Limits of Predictability: Investigating Demographic and Behavioral Differences Using Web Tracking Data." Proceedings of the International AAAI Conference on Web and Social Media. 2021. https://arxiv.org/abs/2012.15112.

    The code used to analyze the data is also available at https://github.com/gesiscss/web_tracking.

    If you use data or code from this repository, please cite the paper above and the Zenodo link.

  12. ‘2018 NYC Open Data Plan: FOIL Datasets’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Jan 26, 2022
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2022). ‘2018 NYC Open Data Plan: FOIL Datasets’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/data-gov-2018-nyc-open-data-plan-foil-datasets-9c09/1da368a9/?iid=000-489&v=presentation
    Dataset updated
    Jan 26, 2022
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of ‘2018 NYC Open Data Plan: FOIL Datasets’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/2745ca6a-eab5-42b8-91f6-5eeebc86a74b on 26 January 2022.

    --- Dataset description provided by original source is as follows ---

    Local Law 7 of 2016 requires agencies to “review responses to freedom of information law [FOIL] requests that include the release of data to determine if such responses consist of or include public data sets that have not yet been included on the single web portal or the inclusion” on the Open Data Portal. Additionally, each City agency shall disclose “the total number, since the last update, of such agency’s freedom of information law responses that included the release of data, the total number of such responses determined to consist of or include a public data set that had not yet been included on the single web portal and the name of such public data set, where applicable, and the total number of such responses that resulted in voluntarily disclosed information being made accessible through the single web portal.”

    See the agency summary statistics on data released in responses to FOIL requests here: https://data.cityofnewyork.us/City-Government/2018-Open-Data-Plan-FOIL-Report/cvse-perd

    See the 2018 Open Data for All Report and Open Data Plan here: https://opendata.cityofnewyork.us/wp-content/uploads/2018/09/2018-NYC-OD4A-report.pdf

    --- Original source retains full ownership of the source dataset ---

  13. Spain Number of Mortgages: Urban Areas: Land Sites

    • ceicdata.com
    Cite
    CEICdata.com (2019). Spain Number of Mortgages: Urban Areas: Land Sites [Dataset]. https://www.ceicdata.com/en/spain/mortgage-statistics/number-of-mortgages-urban-areas-land-sites
    Dataset provided by
    CEIC Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    May 1, 2017 - Apr 1, 2018
    Area covered
    Spain
    Variables measured
    Loans
    Description

    Spain Number of Mortgages: Urban Areas: Land Sites data was reported at 590.000 Unit in May 2018. This records an increase from the previous number of 510.000 Unit for Apr 2018. Spain Number of Mortgages: Urban Areas: Land Sites data is updated monthly, averaging 2,566.000 Unit from Jan 2003 (Median) to May 2018, with 185 observations. The data reached an all-time high of 6,905.000 Unit in May 2007 and a record low of 434.000 Unit in Dec 2016. Spain Number of Mortgages: Urban Areas: Land Sites data remains active status in CEIC and is reported by National Statistics Institute. The data is categorized under Global Database’s Spain – Table ES.EB012: Mortgage Statistics.

  14. FSDKaggle2018

    • zenodo.org
    • opendatalab.com
    • +2more
    zip
    Updated Jan 24, 2020
    Cite
    Eduardo Fonseca; Xavier Favory; Jordi Pons; Frederic Font; Manoj Plakal; Daniel P. W. Ellis; Xavier Serra (2020). FSDKaggle2018 [Dataset]. http://doi.org/10.5281/zenodo.2552860
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eduardo Fonseca; Xavier Favory; Jordi Pons; Frederic Font; Manoj Plakal; Daniel P. W. Ellis; Xavier Serra
    Description

    FSDKaggle2018 is an audio dataset containing 11,073 audio files annotated with 41 labels of the AudioSet Ontology. FSDKaggle2018 has been used for the DCASE Challenge 2018 Task 2, which was run as a Kaggle competition titled Freesound General-Purpose Audio Tagging Challenge.

    Citation

    If you use the FSDKaggle2018 dataset or part of it, please cite our DCASE 2018 paper:

    Eduardo Fonseca, Manoj Plakal, Frederic Font, Daniel P. W. Ellis, Xavier Favory, Jordi Pons, Xavier Serra. "General-purpose Tagging of Freesound Audio with AudioSet Labels: Task Description, Dataset, and Baseline". Proceedings of the DCASE 2018 Workshop (2018)

    You can also consider citing our ISMIR 2017 paper, which describes how we gathered the manual annotations included in FSDKaggle2018.

    Eduardo Fonseca, Jordi Pons, Xavier Favory, Frederic Font, Dmitry Bogdanov, Andres Ferraro, Sergio Oramas, Alastair Porter, and Xavier Serra, "Freesound Datasets: A Platform for the Creation of Open Audio Datasets", In Proceedings of the 18th International Society for Music Information Retrieval Conference, Suzhou, China, 2017

    Contact

    You are welcome to contact Eduardo Fonseca should you have any questions at eduardo.fonseca@upf.edu.

    About this dataset

    Freesound Dataset Kaggle 2018 (or FSDKaggle2018 for short) is an audio dataset containing 11,073 audio files annotated with 41 labels of the AudioSet Ontology [1]. FSDKaggle2018 has been used for the Task 2 of the Detection and Classification of Acoustic Scenes and Events (DCASE) Challenge 2018. Please visit the DCASE2018 Challenge Task 2 website for more information. This Task was hosted on the Kaggle platform as a competition titled Freesound General-Purpose Audio Tagging Challenge. It was organized by researchers from the Music Technology Group of Universitat Pompeu Fabra, and from Google Research’s Machine Perception Team.

    The goal of this competition was to build an audio tagging system that can categorize an audio clip as belonging to one of a set of 41 diverse categories drawn from the AudioSet Ontology.

    All audio samples in this dataset are gathered from Freesound [2] and are provided here as uncompressed PCM 16 bit, 44.1 kHz, mono audio files. Note that because Freesound content is collaboratively contributed, recording quality and techniques can vary widely.

    The ground truth data provided in this dataset has been obtained after a data labeling process which is described below in the Data labeling process section. FSDKaggle2018 clips are unequally distributed in the following 41 categories of the AudioSet Ontology:

    "Acoustic_guitar", "Applause", "Bark", "Bass_drum", "Burping_or_eructation", "Bus", "Cello", "Chime", "Clarinet", "Computer_keyboard", "Cough", "Cowbell", "Double_bass", "Drawer_open_or_close", "Electric_piano", "Fart", "Finger_snapping", "Fireworks", "Flute", "Glockenspiel", "Gong", "Gunshot_or_gunfire", "Harmonica", "Hi-hat", "Keys_jangling", "Knock", "Laughter", "Meow", "Microwave_oven", "Oboe", "Saxophone", "Scissors", "Shatter", "Snare_drum", "Squeak", "Tambourine", "Tearing", "Telephone", "Trumpet", "Violin_or_fiddle", "Writing".

    Some other relevant characteristics of FSDKaggle2018:

    • The dataset is split into a train set and a test set.

    • The train set is meant to be for system development and includes ~9.5k samples unequally distributed among 41 categories. The minimum number of audio samples per category in the train set is 94, and the maximum 300. The duration of the audio samples ranges from 300ms to 30s due to the diversity of the sound categories and the preferences of Freesound users when recording sounds. The total duration of the train set is roughly 18h.

    • Out of the ~9.5k samples in the train set, ~3.7k have manually-verified ground truth annotations and ~5.8k have non-verified annotations. The non-verified annotations of the train set have a quality estimate of at least 65-70% in each category. Check out the Data labeling process section below for more information about this aspect.

    • Non-verified annotations in the train set are properly flagged in train.csv so that participants can opt to use this information during the development of their systems.

    • The test set is composed of 1.6k samples with manually-verified annotations and a category distribution similar to that of the train set. The total duration of the test set is roughly 2h.

    • All audio samples in this dataset have a single label (i.e., each is annotated with only one label). Check out the Data labeling process section below for more information about this aspect. A single label should be predicted for each file in the test set.
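    The verification flag mentioned above can be used to split the train set programmatically. A minimal sketch, assuming the post-competition CSV layout (fname, label, manually_verified columns; treat these names as assumptions if your copy differs), with a tiny in-memory sample standing in for the real file:

    ```python
    import csv
    import io

    # In-memory stand-in for train.csv; the real file is read the same way.
    # Column names assumed: fname, label, manually_verified (1 = verified).
    sample = io.StringIO(
        "fname,label,manually_verified\n"
        "001.wav,Cello,1\n"
        "002.wav,Bark,0\n"
        "003.wav,Meow,1\n"
    )
    rows = list(csv.DictReader(sample))

    # Keep only the clips whose annotation was manually verified.
    verified = [r["fname"] for r in rows if r["manually_verified"] == "1"]
    print(verified)  # ['001.wav', '003.wav']
    ```

    Systems that want to exploit the quality estimate could instead weight non-verified samples down rather than discard them.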

    Data labeling process

    The data labeling process started from a manual mapping between Freesound tags and AudioSet Ontology categories (or labels), which was carried out by researchers at the Music Technology Group, Universitat Pompeu Fabra, Barcelona. Using this mapping, a number of Freesound audio samples were automatically annotated with labels from the AudioSet Ontology. These annotations can be understood as weak labels since they express the presence of a sound category in an audio sample.

    Then, a data validation process was carried out in which a number of participants listened to the annotated sounds and manually assessed the presence or absence of the automatically assigned sound category, according to the AudioSet category description.

    Audio samples in FSDKaggle2018 are annotated with a single ground truth label only (see train.csv). A total of 3,710 annotations in the train set have been manually validated as present and predominant (some, but not all, with inter-annotator agreement). This means that in most cases there is no additional acoustic material other than the labeled category. In a few cases there may be some additional sound events, but these additional events won't belong to any of the 41 categories of FSDKaggle2018.

    The rest of the annotations have not been manually validated and therefore some of them could be inaccurate. Nonetheless, we have estimated that at least 65-70% of the non-verified annotations per category in the train set are indeed correct. It can happen that some of these non-verified audio samples present several sound sources even though only one label is provided as ground truth. These additional sources are typically out of the set of the 41 categories, but in a few cases they could be within.

    More details about the data labeling process can be found in [3].

    License

    FSDKaggle2018 has licenses at two different levels, as explained next.

    All sounds in Freesound are released under Creative Commons (CC) licenses, and each audio clip has its own license as defined by the audio clip uploader in Freesound. For attribution purposes and to facilitate attribution of these files to third parties, we include a relation of the audio clips included in FSDKaggle2018 and their corresponding license. The licenses are specified in the files train_post_competition.csv and test_post_competition_scoring_clips.csv.

    In addition, FSDKaggle2018 as a whole is the result of a curation process and it has an additional license. FSDKaggle2018 is released under CC-BY. This license is specified in the LICENSE-DATASET file downloaded with the FSDKaggle2018.doc zip file.

    Files

    FSDKaggle2018 can be downloaded as a series of zip files with the following directory structure:

    root
    │
    └───FSDKaggle2018.audio_train/                       Audio clips in the train set
    │
    └───FSDKaggle2018.audio_test/                        Audio clips in the test set
    │
    └───FSDKaggle2018.meta/                              Files for evaluation setup
    │   │
    │   └───train_post_competition.csv                   Data split and ground truth for the train set
    │   │
    │   └───test_post_competition_scoring_clips.csv      Ground truth for the test set
    │
    └───FSDKaggle2018.doc/
        │
        └───README.md                                    The dataset description file you are reading
        │
        └───LICENSE-DATASET

  15. ABoVE: Active Layer Soil Characterization of Permafrost Sites, Northern...

    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    • data.nasa.gov
    Updated Mar 20, 2025
    + more versions
    nasa.gov (2025). ABoVE: Active Layer Soil Characterization of Permafrost Sites, Northern Alaska, 2018 - Dataset - NASA Open Data Portal [Dataset]. https://data.staging.idas-ds1.appdat.jsc.nasa.gov/dataset/above-active-layer-soil-characterization-of-permafrost-sites-northern-alaska-2018-f1c16
    Explore at:
    Dataset updated
    Mar 20, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Area covered
    Arctic Alaska, Alaska
    Description

    This dataset provides in situ soil measurements including soil dielectric properties, temperature and moisture profiles, active layer thickness (ALT), and measurements of soil organic matter, bulk density, porosity, texture, and coarse root biomass. Samples were collected from the surface to the permafrost table in soil pits at selected sites along the Dalton Highway in Northern Alaska. From north to south, the study sites are Franklin Bluffs, Sagwon, Happy Valley, Ice Cut, and Imnavait Creek. Measurements were made from August 22 to August 26, 2018. The purpose of the field campaign was to characterize the dielectric properties of permafrost active-layer soils in support of the NASA Arctic and Boreal Vulnerability Experiment (ABoVE) Airborne Campaign.

  16. Statistical Regions and Provincial Data Collection 2018 - Datasets - This...

    • store.smartdatahub.io
    Updated Nov 11, 2024
    + more versions
    (2024). Statistical Regions and Provincial Data Collection 2018 - Datasets - This service has been deprecated - please visit https://www.smartdatahub.io/ to access data. See the About page for details. // [Dataset]. https://store.smartdatahub.io/dataset/fi_tilastokeskus_tilastointialueet_maakunta4500k_2018
    Explore at:
    Dataset updated
    Nov 11, 2024
    Description

    This dataset collection consists of related data tables sourced from the Finnish website, Tilastokeskus (Statistics Finland). The data includes statistical information about various regions in Finland, as provided via the Tilastokeskus web service interface. These tables collectively offer a thorough depiction of statistical areas within the scope of the dataset, contributing to a comprehensive understanding of the corresponding subject matter. The data is organized in a tabular format to allow for easy analysis and interpretation. This dataset is licensed under CC BY 4.0 (Creative Commons Attribution 4.0, https://creativecommons.org/licenses/by/4.0/deed.fi).

  17. Love Matters Sample Website Data 2018

    • dataverse.harvard.edu
    Updated May 22, 2019
    Lindsay van Clief (2019). Love Matters Sample Website Data 2018 [Dataset]. http://doi.org/10.7910/DVN/WXUOA2
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 22, 2019
    Dataset provided by
    Harvard Dataverse
    Authors
    Lindsay van Clief
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This is a sample of Google Analytics data from the Love Matters websites in India, Mexico, Kenya, Nigeria, and Egypt. Love Matters is a program of RNW Media (www.rnw.org).

  18. Requirements data sets (user stories)

    • zenodo.org
    • data.mendeley.com
    txt
    Updated Jan 13, 2025
    Fabiano Dalpiaz; Fabiano Dalpiaz (2025). Requirements data sets (user stories) [Dataset]. http://doi.org/10.17632/7zbk8zsd8y.1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jan 13, 2025
    Dataset provided by
    Mendeley Ltd.
    Authors
    Fabiano Dalpiaz; Fabiano Dalpiaz
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    A collection of 22 data sets of 50+ requirements each, expressed as user stories.

    The dataset has been created by gathering data from web sources, and we are not aware of license agreements or intellectual property rights on the requirements / user stories. The curator exercised utmost diligence in minimizing the risks of copyright infringement by using non-recent data that is less likely to be critical, by sampling a subset of the original requirements collection, and by qualitatively analyzing the requirements. In case of copyright infringement, please contact the dataset curator (Fabiano Dalpiaz, f.dalpiaz@uu.nl) to discuss the possibility of removal of that dataset [see Zenodo's policies].

    The data sets have been originally used to conduct experiments about ambiguity detection with the REVV-Light tool: https://github.com/RELabUU/revv-light

    This collection has been originally published in Mendeley data: https://data.mendeley.com/datasets/7zbk8zsd8y/1

    Overview of the datasets [data and links added in December 2024]

    The following text provides a description of the datasets, including links to the systems and websites, when available. The datasets are organized by macro-category and then by identifier.

    Public administration and transparency

    g02-federalspending.txt (2018) originates from early data in the Federal Spending Transparency project, which pertains to the website used to publicly share the spending data of the U.S. government. The website was created because of the Digital Accountability and Transparency Act of 2014 (DATA Act). The specific dataset pertains to a system called DAIMS, or Data Broker, which stands for DATA Act Information Model Schema. The sample that was gathered refers to a sub-project related to allowing the government to act as a data broker, thereby providing data to third parties. The data for the Data Broker project is currently not available online, although the backend seems to be hosted on GitHub under a CC0 1.0 Universal license. Current and recent snapshots of federal spending related websites, including many more projects than the one described in the shared collection, can be found here.

    g03-loudoun.txt (2018) is a set of requirements extracted from a document, by Loudoun County, Virginia, that describes the to-be user stories and use cases for a land management readiness assessment system called Loudoun County LandMARC. The source document can be found here and is part of the Electronic Land Management System and EPlan Review Project RFP/RFQ issued in March 2018. More information about the overall LandMARC system and services can be found here.

    g04-recycling.txt (2017) concerns a web application where recycling and waste disposal facilities can be searched for and located. The application operates through the visualization of a map that the user can interact with. The dataset was obtained from a GitHub website and is at the basis of a students' project on website design; the code is available (no license).

    g05-openspending.txt (2018) is about the OpenSpending project (www), a project of the Open Knowledge Foundation which aims at transparency about how local governments spend money. At the time of the collection, the data was retrieved from a Trello board that is currently unavailable. The sample focuses on publishing, importing and editing datasets, and on how the data should be presented. Currently, OpenSpending is managed via a GitHub repository which contains multiple sub-projects with unknown licenses.

    g11-nsf.txt (2018) refers to a collection of user stories referring to the NSF Site Redesign & Content Discovery project, which originates from a publicly accessible GitHub repository (GPL 2.0 license). In particular, the user stories refer to an early version of the NSF's website. The user stories can be found as closed Issues.

    (Research) data and meta-data management

    g08-frictionless.txt (2016) regards the Frictionless Data project, which offers an open source dataset for building data infrastructures, to be used by researchers, data scientists, and data engineers. Links to the many projects within the Frictionless Data project are on GitHub (with a mix of Unlicense and MIT licenses) and the web. The specific set of user stories was collected in 2016 by GitHub user @danfowler and is stored in a Trello board.

    g14-datahub.txt (2013) concerns the open source project DataHub, which is currently developed via a GitHub repository (the code has Apache License 2.0). DataHub is a data discovery platform which has been developed over multiple years. The specific data set is an initial set of user stories, which we can date back to 2013 thanks to a comment therein.

    g16-mis.txt (2015) is a collection of user stories that pertains to a repository for researchers and archivists. The source of the dataset is a public Trello board. Although the user stories do not have explicit links to projects, it can be inferred that they originate from some project related to the library of Duke University.

    g17-cask.txt (2016) refers to the Cask Data Application Platform (CDAP). CDAP is an open source application platform (GitHub, under Apache License 2.0) that can be used to develop applications within the Apache Hadoop ecosystem, an open-source framework which can be used for distributed processing of large datasets. The user stories are extracted from a document that includes requirements regarding dataset management for Cask 4.0, which includes the scenarios, user stories and a design for the implementation of these user stories. The raw data is available in the following environment.

    g18-neurohub.txt (2012) is concerned with the NeuroHub platform, a neuroscience data management, analysis and collaboration platform for researchers in neuroscience to collect, store, and share data with colleagues or with the research community. The user stories were collected at a time when NeuroHub was still a research project sponsored by the UK Joint Information Systems Committee (JISC). For information about the research project from which the requirements were collected, see the following record.

    g22-rdadmp.txt (2018) is a collection of user stories from the Research Data Alliance's working group on DMP Common Standards. Their GitHub repository contains a collection of user stories that were created by asking the community to suggest functionality that should be part of a website that manages data management plans. Each user story is stored as an issue on the group's GitHub page.

    g23-archivesspace.txt (2012-2013) refers to ArchivesSpace: an open source web application for managing archives information. The application is designed to support core functions in archives administration such as accessioning; description and arrangement of processed materials including analog, hybrid, and born-digital content; management of authorities and rights; and reference service. The application supports collection management through collection management records, tracking of events, and a growing number of administrative reports. ArchivesSpace is open source and its

  19. 2018 Open Data Plan: FOIL Datasets

    • data.wu.ac.at
    csv, json, xml
    Updated Oct 4, 2018
    + more versions
    NYC Open Data (2018). 2018 Open Data Plan: FOIL Datasets [Dataset]. https://data.wu.ac.at/schema/data_ny_gov/c2pkaS1hNnVz
    Explore at:
    Available download formats: csv, xml, json
    Dataset updated
    Oct 4, 2018
    Dataset provided by
    NYC Open Data
    Description

    Local Law 7 of 2016 requires agencies to “review responses to freedom of information law [FOIL] requests that include the release of data to determine if such responses consist of or include public data sets that have not yet been included on the single web portal or the inclusion” on the Open Data Portal. Additionally, each City agency shall disclose “the total number, since the last update, of such agency’s freedom of information law responses that included the release of data, the total number of such responses determined to consist of or include a public data set that had not yet been included on the single web portal and the name of such public data set, where applicable, and the total number of such responses that resulted in voluntarily disclosed information being made accessible through the single web portal.”

    See the agency summary statistics on data released in responses to FOIL requests here: https://data.cityofnewyork.us/City-Government/2018-Open-Data-Plan-FOIL-Report/cvse-perd

    See the 2018 Open Data for All Report and Open Data Plan here: https://opendata.cityofnewyork.us/wp-content/uploads/2018/09/2018-NYC-OD4A-report.pdf

  20. Italian State Libraries 1994-2018, focus 2000-2018

    • kaggle.com
    Updated Feb 16, 2021
    Roberto Lofaro (2021). Italian State Libraries 1994-2018, focus 2000-2018 [Dataset]. https://www.kaggle.com/robertolofaro/italian-state-libraries-19942018-focus-20002018/metadata
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 16, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Roberto Lofaro
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    Italy
    Description

    Context

    This dataset has been created to support further publication and analysis activities.

    Content

    Full documentation on the sources etc. of the two CSVs in this dataset, along with the activities carried out to produce them from data available on the Statistical Office website of the Ministero dei Beni Culturali (MIBACT), is available within the Kaggle dataset.

    The companion Jupyter Notebook is delivered as a quick overview of the data, as the dataset has been created to support further publications and analyses using different algorithms.

    Limitations:

    1. See the "Data coverage" section: the CSVs contain the data available (summaries from 1994 until 2018; details by Regione/Provincia, political territorial subdivisions in Italy, from 1998 until 2018)
    2. As the data concerning Bologna have been removed from 2000 (transferred to another ministry, see the explanation within the "Data coverage" section), all further analyses will be done for the timespan 2000-2018

    Source: the section "Rilevazioni e dati statistici" (surveys and statistical data) on the Statistical Office website of MIBACT; focus: "Biblioteche Pubbliche Statali" (State-owned public libraries in Italy)

    Data coverage: the datasets cover 1994-2018 for the summary, as those data were available, and 1998-2018 for the details.

    As stated by the source website: "Le unità statistiche di riferimento di questa Rilevazione sono rappresentate dalle 46 Biblioteche Pubbliche Statali, indicate dal D.P.R.5/7/1995, n. 417, modificato dal D.M. del 12/06/2000, che ha disposto il trasferimento della Biblioteca Universitaria di Bologna (BUB) al MURST.".

    Or: the statistical units of reference of this survey are the 46 State Public Libraries designated by D.P.R. 5/7/1995, n. 417, as modified by the D.M. of 12/06/2000, which transferred the Biblioteca Universitaria di Bologna (BUB) to another ministry (MURST).

    Furthermore, "I dati di questa Rilevazione, disponibili in questa pagina web, riguardano la consistenza del materiale bibliografico, le consultazioni, i prestiti, il personale e le spese di gestione a partire dal 1999."

    Or: the data of this survey, available on this web page, cover the holdings of bibliographic material, consultations, loans, personnel, and operating costs from 1999 onward.

    Therefore, while the two CSVs include all the data available: 1. the "BibliotecheStatali_01_published.csv" file contains data from the "Dati storici (quinquennali)" available for 1998-2018 (the 1998 file extends back to 1994); each file contained the data for the current year, plus the four previous years, extending back in time if there was a re-assessment of prior data

    Following a common practice in business, the data used are the latest version of each year, i.e. the CSV has been created by using the data from the 2018 "Dati storici (quinquennali)" and then going back in time; the column "Source" clearly states the source table from the website.

    Two columns from the original files have been ignored:

    "Spese di gestione" (operating costs), as it is not within the scope of the publications and analyses

    "Personale" (personnel), as the details within "Tavola 1. Consistenza del materiale, consultazioni, prestiti e personale (Dati per Provincia)" (represented within the other file) have been used instead

    2. the "BibliotecheStatali_01_Tavola1_published.csv" file contains data from "Tavola 1. Consistenza del materiale, consultazioni, prestiti e personale (Dati per Provincia)", available for 1998-2018

    To ensure consistency, as e.g. the RIETI library data was not available for a number of years and, as stated above, the BOLOGNA library was removed from 2000, lines containing "empty" have been added to keep both RIETI and BOLOGNA in the data.

    To ensure consistency, the "not available" marker, which appeared in the data as either a left- or right-aligned "-" or as "...", has been replaced by "empty".
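    When loading these CSVs, the "empty" placeholder can then be mapped to None so that numeric columns parse cleanly. A hedged sketch with illustrative column names (the in-memory sample stands in for the real file; the actual headers may differ):

    ```python
    import csv
    import io

    # Illustrative stand-in for one of the BibliotecheStatali CSVs.
    # "empty" marks values the curator substituted for "-" and "...".
    sample = io.StringIO(
        "Anno,Biblioteca,Prestiti\n"
        "2001,RIETI,empty\n"
        "2001,BOLOGNA,empty\n"
        "2001,FIRENZE,184321\n"
    )

    def parse(value):
        # Map the "empty" placeholder to None; otherwise parse as an integer.
        return None if value == "empty" else int(value)

    rows = [
        {**r, "Prestiti": parse(r["Prestiti"])}
        for r in csv.DictReader(sample)
    ]
    print([r["Prestiti"] for r in rows])  # [None, None, 184321]
    ```

    Keeping the placeholder rows (rather than dropping them) preserves the full 1998-2018 panel for every library, which is what the curator intended.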

    Acknowledgements

    Thanks to the publisher of the data

    Inspiration

    Too many to list
