35 datasets found
  1. Filter (Mature)

    • data-salemva.opendata.arcgis.com
    Updated Jul 3, 2014
    Cite
    esri_en (2014). Filter (Mature) [Dataset]. https://data-salemva.opendata.arcgis.com/items/1bdcdf930b4345dfb4db10f795e0c726
    Explore at:
    Dataset updated
    Jul 3, 2014
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    esri_en
    Description

    Filter is a configurable app template that displays a map with an interactive filtered view of one or more feature layers. The application displays prompts and hints for attribute filter values, which are used to locate specific features.

    Use Cases
    Filter displays an interactive dialog box for exploring the distribution of a single attribute or the relationship between different attributes. This is a good choice when you want to understand the distribution of different types of features within a layer, or create an experience where you can gain deeper insight into how the interaction of different variables affects the resulting map content.

    Configurable Options
    Filter can present a web map and be configured with the following options:
    • Choose the web map used in the application.
    • Provide a title and color theme. The default title is the web map name.
    • Configure the ability for feature and location search.
    • Define the filter experience and provide text to encourage user exploration of data by displaying additional values to choose as the filter text.

    Supported Devices
    This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

    Data Requirements
    Requires at least one layer with an interactive filter. See the Apply Filters help topic for more details.

    Get Started
    This application can be created in the following ways:
    • Click the Create a Web App button on this page.
    • Share a map and choose to Create a Web App.
    • On the Content page, click Create - App - From Template.
    Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.

  2. Metadata and antimicrobial resistance gene count data from dusts collected on Canadian vehicle filters

    • search.dataone.org
    • datadryad.org
    Updated Dec 18, 2024
    Cite
    Paul George; Florent Rossi; Marc Veillette; Amélia Bélanger Cayouette; Samantha Leclerc; Cindy Dumais; Nathalie Turgeon; Caroline Duchaine (2024). Metadata and antimicrobial resistance gene count data from dusts collected on Canadian vehicle filters [Dataset]. http://doi.org/10.5061/dryad.69p8cz9cx
    Explore at:
    Dataset updated
    Dec 18, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Paul George; Florent Rossi; Marc Veillette; Amélia Bélanger Cayouette; Samantha Leclerc; Cindy Dumais; Nathalie Turgeon; Caroline Duchaine
    Area covered
    Canada
    Description

    The role of bioaerosols in the dispersal of antimicrobial resistance genes (ARGs) and resistant microorganisms is poorly understood. In addition, bioaerosols are powerful composite samples representative of the surrounding environment and can be used as sentinels of many local habitats. Evidence suggests that using environmental DNA from dust collected on vehicle cabin air filters can define regional resistance profiles. Here, this method was used to investigate differences in resistance gene profiles, their underlying bacterial communities, and their links to anthropogenic and environmental variables across Canada. In total, 477 car filter samples were collected, with every province and territory being represented. DNA was extracted from filter dust. High-throughput qPCR was used to detect and quantify a panel of 36 ARGs and 3 mobile genetic elements. Bacterial biomass was assessed using standard qPCR methods of the 16S rRNA gene, which was also used to assess bacterial biodiversity vi...

    In total, 477 vehicle cabin air filters were collected from 51 locations across Canada, through a network of participating mechanics, individuals, and municipal governments. Two additional filters were collected as a methodological control. We asked that filters be collected during routine vehicle maintenance and placed in sterile resealable plastic bags for transport to the Institut universitaire de cardiologie et de pneumologie de Québec – Université Laval in Quebec City, QC, Canada for processing. We also asked participants to include the odometer reading (km) since the last change and the forward sortation area of their postal code for environmental and population metadata collection. These data are not presented, to preserve anonymity. Filters were categorised into the 6 geographical regions of Canada as defined by Statistics Canada: British Columbia, Prairies, Ontario, Quebec, Atlantic, Territories. Filters were collected between summer 2020 and winter 2021. During this time, travel restri...

    # Metadata and antimicrobial resistance gene count data from dusts collected on Canadian vehicle filters

    https://doi.org/10.5061/dryad.69p8cz9cx

    Description of the data and file structure

    Files and variables

    File: TableS2.xlsx

    Description: Environmental metadata for each filter collected in this study. Please note that identifying data like city, postal code, latitude, and longitude are not presented so as to preserve participant anonymity.

    Variables
    • Filter: The ID number of each individual filter collected.
    • Province: The Canadian province or territory in which each filter was collected, using official two-letter abbreviations (BC = British Columbia, AB = Alberta, SK = Saskatchewan, MB = Manitoba, ON = Ontario, QC = Québec, NB = New Brunswick, NS = Nova Scotia, PE = Prince Edward Island, NL = Newfoundland and Labrador, YK = Yukon, NT = Northwest Territories, NU = Nunavut).
    • KM_driven: The odometer reading since...
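    A minimal loading sketch, assuming pandas (with openpyxl) is installed and that TableS2.xlsx follows the variable layout documented above:

    ```python
    # Load the per-filter metadata and summarise collection effort by region.
    import pandas as pd

    meta = pd.read_excel("TableS2.xlsx")

    # Filters collected per province/territory (two-letter codes, e.g. QC, ON).
    print(meta["Province"].value_counts())

    # Kilometres driven since the last filter change, summarised by province.
    print(meta.groupby("Province")["KM_driven"].describe())
    ```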
  3. Dataset to accompany publication "Re-defining non-tracking solar cell efficiency limits with directional spectral filters"

    • data.niaid.nih.gov
    Updated Mar 30, 2025
    Cite
    Bowman, Alan Richard; Stranks, Sam; Tagliabue, Giulia (2025). Dataset to accompany publication "Re-defining non-tracking solar cell efficiency limits with directional spectral filters" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_15020699
    Explore at:
    Dataset updated
    Mar 30, 2025
    Dataset provided by
    École Polytechnique Fédérale de Lausanne
    Swiss Federal Institute of Technology in Lausanne
    Authors
    Bowman, Alan Richard; Stranks, Sam; Tagliabue, Giulia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset accompanies the publication "Re-defining non-tracking solar cell efficiency limits with directional spectral filters" published in ACS Photonics (10.1021/acsphotonics.4c02181). The data can be used to reproduce figures 2-4 in the main text and all plots with data in the supporting information (noting figure 1 in the main text is only schematics). All data was generated via home-built modelling codes. All files are in .CSV and easily readable. The abstract for the associated paper is as follows:

    Optical filters that respond to the wavelength and direction of incident light can be used to increase the efficiency of tracking solar cells. However, as tracking solar cells are more expensive to install and maintain, it is likely that non-tracking solar cells will remain the main product of the (terrestrial) solar cell industry. Here we demonstrate that wavelength and directionally selective filters can also be used to increase the efficiency limit of non-tracking solar cells at the equator beyond what is currently understood by up to ~ 0.5 % (relative ~ 1.8 %). We also reveal that such filters can be used to regulate the energy output of solar cells throughout a day or year, and can reduce the thickness of the absorber layer by up to 40 %. We anticipate that similar gains would be seen at other latitudes. As this filter has complex wavelength-direction functionality, we present a proof-of-concept design based on Luneburg lenses, demonstrating these filters can be realized. Our results will enable solar cells with higher efficiency and more stable output while using less material.
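    Since the record consists of plain .csv files, a generic loader is enough to start reproducing the figures. A minimal sketch, with a hypothetical filename standing in for an actual file from the Zenodo record:

    ```python
    # Inspect and plot one of the record's CSV files (filename is hypothetical).
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("figure2_data.csv")         # substitute a real file name
    print(df.head())                             # check columns and units
    df.plot(x=df.columns[0], y=df.columns[1])    # first column vs. second
    plt.show()
    ```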

  4. P001. ML Course KU 3927-23-00-00

    • kaggle.com
    zip
    Updated Jun 2, 2023
    + more versions
    Cite
    Cameron MacPherson (2023). P001. ML Course KU 3927-23-00-00 [Dataset]. https://www.kaggle.com/datasets/cameronmacpherson/ml-course-ku-3927220000-p001
    Explore at:
    zip (710526135 bytes)
    Dataset updated
    Jun 2, 2023
    Authors
    Cameron MacPherson
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Purpose & context This is a public dataset to be used as the community wants. The dataset was contributed for the purpose of running the University of Copenhagen course entitled "Hackathon – application of machine learning in biomedical research". The course is identified by its activity number, 3927-23-00-00. The first part of the course included introductory training in machine learning for those with a minimal understanding of Python; the course and its teaching materials can be found on GitHub.

    Abstract While tissue type can be defined naturally, the notion of cell type is less well-defined. Still, it is key in our thinking and much research. We could define it loosely by a state of the cell. The state could, for instance, be given by the cell’s transcriptome: we can use single-cell RNA sequencing to reveal the combined information of how every gene in a cell is expressed, and thereby a state of the cell. The line of thought is that the transcriptomes are far from random but adhere to certain patterns, which we call cell types.

    Gene expression does not act alone; it is accompanied by a host of epigenetic states. Some of these can be measured even at the single-cell level. One such state is chromatin accessibility, which we obtain for single cells by a method called Assay for Transposase-Accessible Chromatin (ATAC-seq).

    Chromatin accessibility accompanies gene expression, since it is an important regulator of gene expression. Genes in genomic areas of open chromatin are more likely to be transcribed compared to genes located in closed chromatin. At the same time chromatin accessibility can be used to identify regulatory DNA elements that contain transcription factor binding sites, which can be bound by transcription factors and other regulatory proteins to facilitate expression of a nearby gene. Chromatin accessibility of these regulatory DNA elements is typically cell type-specific and can provide important information on how a given gene is being regulated in a given cell population.

    Whereas clustering approaches based on RNA-seq gene expression data can assign a single-cell transcriptome to a given known cell type identity (for example, classifying a given cell as an adipocyte based on its marker genes), it remains less straightforward to assign a cell to a given cell type identity solely based on single-cell chromatin accessibility data.

    Research questions First, based on single-cell chromatin accessibility data, assign cells to their correct cell type.

    This should consist of a method for partitioning a given cell population into subsets of cells that are in some way ‘similar’. Here the notion of similarity is up to the participants. Put differently, the task is to obtain a method for clustering a cell population based on some type of similarity of chromatin accessibility.

    Second, based on single-cell gene expression data (scRNA sequencing), assign cells to their correct cell type. As for the chromatin accessibility, this must be based upon a notion of similarity of the cells, now using the scRNA seq information.

    It is possible from chromatin accessibility data to derive ‘proxy RNA-seq data’ by aggregating counts in regions around genes’ transcription start sites. We will provide such ‘gene activity’ data too. You can then apply your scRNA-seq based clustering to that set of data also.

    Biological labelling of the clusters is not expected (i.e. clusters will be labelled by some arbitrary index), but ideas are highly appreciated.

    Tasks – not set in stone, you can add/replace with your own questions:
    1. Obtain a method for clustering cells based on chromatin accessibility.
    2. Obtain a method for clustering cells based on scRNA seq / gene activity derived from chromatin accessibility data.
    3. Compare the clusterings from 1 and 2: the provided data give both gene expression and chromatin accessibility in each cell.
    4. How ‘robust’ are your methods w.r.t. the preprocessing/filtering of the data? (We provide a set of filtered data, but you can have a more ‘raw’ set too; you can then try out your own filtering methods and check their effects.)
    5. Investigate how your methods handle rare cell types. This can be done by down-sampling one of the (larger) clusters you have derived by applying your methods (from 1 or 2) to the full data set. Will a rare cell type be lost? (Or) Can it affect the clustering of the rest of the cells?
    6. Is gene activity a good proxy for RNA seq?

    Try to answer 1-3 and at least one of 4-6.
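    As a starting point for tasks 1 and 2, the sketch below shows one common scATAC-seq clustering recipe (TF-IDF weighting plus truncated SVD, often called LSI, followed by k-means). This is an illustration on a toy matrix, not the course's reference solution:

    ```python
    # LSI embedding + k-means on a toy binary cells-by-peaks matrix.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = (rng.random((500, 2000)) < 0.05).astype(float)   # toy accessibility matrix

    X_tfidf = TfidfTransformer().fit_transform(X)        # down-weight ubiquitous peaks
    emb = TruncatedSVD(n_components=30, random_state=0).fit_transform(X_tfidf)
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(emb)
    print(np.bincount(labels))                           # cells per (arbitrary) cluster
    ```

    For task 2, the same k-means step can be applied to a PCA embedding of the scRNA-seq (or gene activity) matrix.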

    Description of the data Multimodal single-cell data where chromatin accessibility and gene expression are measured in the same cell. Across thousands of cells, t...

  5. Data from: Electroencephalography Responses to Simplified Visual Signals...

    • zenodo.org
    bin, zip
    Updated Aug 30, 2023
    Cite
    Enrico Varano; Tobias Reichenbach (2023). Data from: Electroencephalography Responses to Simplified Visual Signals Reveal Explain Differences in Speech-in-Noise Comprehension [Dataset]. http://doi.org/10.5281/zenodo.8298239
    Explore at:
    zip, bin
    Dataset updated
    Aug 30, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Enrico Varano; Tobias Reichenbach
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contents and Folder Structure:

    EEG Experiment

    • EEG_stimuli: these are the videos that were presented to participants in the EEG experiment, and the code that generates them from the original corpus
    • data_2020>split_trials: contains the raw EEG data starting 1.995 s before each trial and ending 1.995 s after each trial, with naming convention subXx_VV_YY_N.fif, where X or Xx is the subject number, YY is the modality condition (AV for audiovisual and V0 for video only), N is the trial number (between 0 and 4 inclusive), and VV is the video condition (1e is the envelope dot, 1m is the mismatched dot, 4v is the cartoon, bw is the edge detection, and nh is the natural condition); a parsing sketch for this naming convention follows the folder listing below.
      • unprocessed>raw: contains the unprocessed raw EEG data
        • processed>Fs-200>BP-1-80-ASR-INTP-AVR: contains the pre-processed raw EEG data: the output of run_preprocessing.m
        • processed>Fs-200>BP-1-80-ASR-INTP-AVR-ICr: contains the pre-processed raw EEG data after ICA cleaning: the output of run_reject_ICs.m
        • stim>stim_dwnspl: contains the aligned 200Hz envelopes of the presented speech used as features for the time-lagged models
    • EEG_analysis_code [note: please extract the contents of this folder to match paths]
      • 2_ICA_filt: this folder contains the MATLAB code that performs the pre-processing of the EEG data, including filtering, downsampling, ICA cleaning etc. The main functions are:
        • run_preprocessing.m: downsampling, filtering, ASR cleaning
        • run_reject_ICs.m: ICLabel ICA cleaning
      • 3_analysis: this is the Python code that performs the TRF and backward modelling on the EEG data. The main functions are:
        • multisensory_bw.py: backwards model
        • multisensory_fw.py: forwards model

    Behavioural Experiment

    • behavioural
      • 0_dataset: these are the videos that were presented to participants in the behavioural experiment, and the code that generates them from the original corpus (AV GRID corpus)
      • 3_analysis: behavioural data analysis script
        • main function: data_grid_v3.py
    • behavioural_data>data_grid: behavioural results
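    The split-trials naming convention described above is regular enough to parse mechanically. A small sketch (the example filename is made up):

    ```python
    # Parse subXx_VV_YY_N.fif into subject, video condition, modality, trial.
    import re

    PATTERN = re.compile(
        r"sub(?P<subject>\d+)_(?P<video>1e|1m|4v|bw|nh)_"
        r"(?P<modality>AV|V0)_(?P<trial>[0-4])\.fif"
    )

    m = PATTERN.match("sub3_bw_AV_2.fif")   # hypothetical example filename
    if m:
        print(m.groupdict())  # {'subject': '3', 'video': 'bw', 'modality': 'AV', 'trial': '2'}
    ```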
  6. Conceptualization of public data ecosystems

    • data.niaid.nih.gov
    Updated Sep 26, 2024
    Cite
    Nikiforova, Anastasija; Lnenicka, Martin (2024). Conceptualization of public data ecosystems [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_13842001
    Explore at:
    Dataset updated
    Sep 26, 2024
    Dataset provided by
    University of Hradec Králové
    University of Tartu
    Authors
    Nikiforova, Anastasija; Lnenicka, Martin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems" conducted by Martin Lnenicka (University of Hradec Králové, Czech Republic), Anastasija Nikiforova (University of Tartu, Estonia), Mariusz Luterek (University of Warsaw, Warsaw, Poland), Petar Milic (University of Pristina - Kosovska Mitrovica, Serbia), Daniel Rudmark (Swedish National Road and Transport Research Institute, Sweden), Sebastian Neumaier (St. Pölten University of Applied Sciences, Austria), Karlo Kević (University of Zagreb, Croatia), Anneke Zuiderwijk (Delft University of Technology, Delft, the Netherlands), Manuel Pedro Rodríguez Bolívar (University of Granada, Granada, Spain).

    As there is a lack of understanding of the elements that constitute different types of value-adding public data ecosystems and how these elements form and shape the development of these ecosystems over time, which can lead to misguided efforts to develop future public data ecosystems, the aim of the study is: (1) to explore how public data ecosystems have developed over time and (2) to identify the value-adding elements and formative characteristics of public data ecosystems. Using an exploratory retrospective analysis and a deductive approach, we systematically review 148 studies published between 1994 and 2023. Based on the results, this study presents a typology of public data ecosystems and develops a conceptual model of elements and formative characteristics that contribute most to value-adding public data ecosystems, and develops a conceptual model of the evolutionary generation of public data ecosystems represented by six generations called Evolutionary Model of Public Data Ecosystems (EMPDE). Finally, three avenues for a future research agenda are proposed.

    This dataset is being made public both to act as supplementary data for "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems", Telematics and Informatics, and to document the Systematic Literature Review component that informs the study.

    Description of the data in this data set

    PublicDataEcosystem_SLR provides the structure of the protocol

    Spreadsheet #1 provides the list of results after the search over three indexing databases and filtering out irrelevant studies.

    Spreadsheet #2 provides the protocol structure.

    Spreadsheet #3 provides the filled protocol for relevant studies.

    The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design-related information, (3) quality-related information, (4) HVD determination-related information.

    Descriptive Information

    Article number

    A study number, corresponding to the study number assigned in an Excel worksheet

    Complete reference

    The complete source information to refer to the study (in APA style), including the author(s) of the study, the year in which it was published, the study's title and other source information.

    Year of publication

    The year in which the study was published.

    Journal article / conference paper / book chapter

    The type of the paper, i.e., journal article, conference paper, or book chapter.

    Journal / conference / book

    The journal, conference, or book in which the paper is published.

    DOI / Website

    A link to the website where the study can be found.

    Number of words

    The number of words in the study.

    Number of citations in Scopus and WoS

    The number of citations of the paper in Scopus and WoS digital libraries.

    Availability in Open Access

    Availability of the study in Open Access or Free / Full Access.

    Keywords

    Keywords of the paper as indicated by the authors (in the paper).

    Relevance for our study (high / medium / low)

    The relevance level of the paper for our study.

    Approach- and research design-related information

    Objective / Aim / Goal / Purpose & Research Questions

    The research objective and established RQs.

    Research method (including unit of analysis)

    The methods used to collect data in the study, including the unit of analysis that refers to the country, organisation, or other specific unit that has been analysed such as the number of use-cases or policy documents, number and scope of the SLR etc.

    Study’s contributions

    The study’s contribution as defined by the authors

    Qualitative / quantitative / mixed method

    Whether the study uses a qualitative, quantitative, or mixed-methods approach.

    Availability of the underlying research data

    Whether the paper has a reference to the public availability of the underlying research data (e.g., transcriptions of interviews, collected data, etc.), or explains why these data are not openly shared.

    Period under investigation

    Period (or moment) in which the study was conducted (e.g., January 2021-March 2022)

    Use of theory / theoretical concepts / approaches? If yes, specify them

    Does the study mention any theory / theoretical concepts / approaches? If yes, what theory / concepts / approaches? If any theory is mentioned, how is theory used in the study? (e.g., mentioned to explain a certain phenomenon, used as a framework for analysis, tested theory, theory mentioned in the future research section).

    Quality-related information

    Quality concerns

    Whether there are any quality concerns (e.g., limited information about the research methods used).

    Public Data Ecosystem-related information

    Public data ecosystem definition

    How the public data ecosystem is defined in the paper, including any equivalent term used instead (most often "infrastructure"). If an alternative term is used, what is the public data ecosystem called in the paper?

    Public data ecosystem evolution / development

    Does the paper define the evolution of the public data ecosystem? If yes, how is it defined and what factors affect it?

    What constitutes a public data ecosystem?

    What constitutes a public data ecosystem (components & relationships) - their "FORM / OUTPUT" presented in the paper (general description with more detailed answers to further additional questions).

    Components and relationships

    What components does the public data ecosystem consist of and what are the relationships between these components? Alternative names for components - element, construct, concept, item, helix, dimension etc. (detailed description).

    Stakeholders

    What stakeholders (e.g., governments, citizens, businesses, Non-Governmental Organisations (NGOs) etc.) does the public data ecosystem involve?

    Actors and their roles

    What actors does the public data ecosystem involve? What are their roles?

    Data (data types, data dynamism, data categories etc.)

    What data does the public data ecosystem cover (i.e., what is it intended / designed for)? This refers to all data-related aspects, including but not limited to data types, data dynamism (static, dynamic, real-time data, streams), prevailing data categories / domains / topics, etc.

    Processes / activities / dimensions, data lifecycle phases

    What processes, activities, dimensions and data lifecycle phases (e.g., locate, acquire, download, reuse, transform, etc.) does the public data ecosystem involve or refer to?

    Level (if relevant)

    What is the level of the public data ecosystem covered in the paper? (e.g., city, municipal, regional, national (=country), supranational, international).

    Other elements or relationships (if any)

    What other elements or relationships does the public data ecosystem consist of?

    Additional comments

    Additional comments (e.g., what other topics affected the public data ecosystems and their elements, what is expected to affect the public data ecosystems in the future, what were important topics by which the period was characterised etc.).

    New papers

    Does the study refer to any other potentially relevant papers?

    Additional references to potentially relevant papers that were found in the analysed paper (snowballing).

    Format of the files: .xls, .csv (for the first spreadsheet only), .docx

    Licenses or restrictions: CC-BY

    For more info, see README.txt

  7. Specification and optimization of analytical data flows

    • resodate.org
    Updated May 27, 2016
    Cite
    Fabian Hüske (2016). Specification and optimization of analytical data flows [Dataset]. http://doi.org/10.14279/depositonce-5150
    Explore at:
    Dataset updated
    May 27, 2016
    Dataset provided by
    Technische Universität Berlin
    DepositOnce
    Authors
    Fabian Hüske
    Description

    In the past, the majority of data analysis use cases were addressed by aggregating relational data. In recent years, an evolving trend called "Big Data" has had several implications for the field of data analysis. Compared to previous applications, much larger data sets are analyzed using more elaborate and diverse analysis methods such as information extraction techniques, data mining algorithms, and machine learning methods. At the same time, analysis applications include data sets with less or even no structure at all. This evolution has implications for the requirements on data processing systems. Due to the growing size of data sets and the increasing computational complexity of advanced analysis methods, data must be processed in a massively parallel fashion. The large number and diversity of data analysis techniques, as well as the lack of data structure, motivate the use of user-defined functions and data types. Many traditional database systems are not flexible enough to satisfy these requirements. Hence, there is a need for programming abstractions to define and efficiently execute complex parallel data analysis programs that support custom user-defined operations.

    The success of the SQL query language has shown the advantages of declarative query specification, such as potential for optimization and ease of use. Today, most relational database management systems feature a query optimizer that compiles declarative queries into physical execution plans. Cost-based optimizers choose from billions of plan candidates the plan with the least estimated cost. However, traditional optimization techniques cannot be readily integrated into systems that aim to support novel data analysis use cases. For example, the use of user-defined functions (UDFs) can significantly limit the optimization potential of data analysis programs. Furthermore, a lack of detailed data statistics is common when large amounts of unstructured data are analyzed. This leads to imprecise optimizer cost estimates, which can cause sub-optimal plan choices.

    In this thesis we address three challenges that arise in the context of specifying and optimizing data analysis programs. First, we propose a parallel programming model with declarative properties to specify data analysis tasks as data flow programs. In this model, data processing operators are composed of a system-provided second-order function and a user-defined first-order function. A cost-based optimizer compiles data flow programs specified in this abstraction into parallel data flows. The optimizer borrows techniques from relational optimizers and ports them to the domain of general-purpose parallel programming models. Second, we propose an approach to enhance the optimization of data flow programs that include UDF operators with unknown semantics. We identify operator properties and conditions to reorder neighboring UDF operators without changing the semantics of the program. We show how to automatically extract these properties from UDF operators by leveraging static code analysis techniques. Our approach is able to emulate relational optimizations such as filter and join reordering and holistic aggregation push-down while not being limited to relational operators. Finally, we analyze the impact of changing execution conditions, such as varying predicate selectivities and memory budgets, on the performance of relational query plans. We identify plan patterns that cause significantly varying execution performance under changing execution conditions. Plans that include such risky patterns are prone to cause problems in the presence of imprecise optimizer estimates. Based on our findings, we introduce an approach to avoid risky plan choices. Moreover, we present a method to assess the risk of a query execution plan using a machine-learned prediction model. Experiments show that the prediction model outperforms risk predictions computed from optimizer estimates.
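    The operator model described above (a system-provided second-order function parameterised by a user-defined first-order function) can be illustrated compactly. A toy sketch in Python, not the thesis's actual system, which targets massively parallel data flows:

    ```python
    # Toy data-flow model: each operator pairs a second-order function with a UDF.
    from functools import reduce

    def map_op(udf):                  # system-provided second-order function
        return lambda data: [udf(x) for x in data]

    def filter_op(udf):               # another second-order function
        return lambda data: [x for x in data if udf(x)]

    def flow(*ops):                   # compose operators into a data flow
        return lambda data: reduce(lambda d, op: op(d), ops, data)

    program = flow(                   # lambdas are user-defined first-order functions
        filter_op(lambda r: r["views"] > 0),
        map_op(lambda r: {**r, "ctr": r["clicks"] / r["views"]}),
    )
    print(program([{"clicks": 3, "views": 10}, {"clicks": 1, "views": 0}]))
    ```

    An optimizer in this setting would reorder such operators (e.g., pushing filters ahead of maps) when static analysis of the UDFs shows the program's semantics are preserved.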

  8. Japan Number Dataset

    • listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). Japan Number Dataset [Dataset]. https://listtodata.com/japan-dataset
    Explore at:
    .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Authors
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    Japan
    Variables measured
    phone numbers, email address, full name, address, city, state, gender, age, income, IP address
    Description

    Japan number dataset allows you to filter phone numbers based on different criteria: you can pick contacts by gender, age, and relationship status, which makes it easy to find the right contacts for your needs. We regularly remove invalid data to keep the list accurate and reliable, and we follow GDPR rules to respect everyone’s privacy while providing useful information. Japan Phone Data contains contact numbers collected from trusted sources; you can check the source URLs to see where the data came from, and we only collect opt-in data, meaning everyone on the list has agreed to share their contact details. We also provide support 24/7 to help you with any questions. List to Data helps you find contact information for businesses. The Japan phone number list can be filtered by gender, age, and relationship status to narrow your search, and because invalid entries are removed regularly, you only see valid numbers. Overall, this list is a tool for connecting with people in Japan while respecting their privacy.
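    If the delivered files follow the "variables measured" list above, the filtering the vendor describes reduces to ordinary dataframe operations. A sketch with guessed column names and a hypothetical filename:

    ```python
    # Filter contacts by gender and age range (column names are assumptions).
    import pandas as pd

    contacts = pd.read_csv("japan_numbers.csv")      # hypothetical filename
    selected = contacts[(contacts["gender"] == "female")
                        & (contacts["age"].between(25, 40))]
    print(selected[["full name", "phone numbers"]].head())
    ```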

  9. Data from: S1 Dataset -

    • plos.figshare.com
    xlsx
    Updated Jun 2, 2023
    + more versions
    Cite
    Qiwei Wang; Xiaoya Zhu; Manman Wang; Fuli Zhou; Shuang Cheng (2023). S1 Dataset - [Dataset]. http://doi.org/10.1371/journal.pone.0286034.s001
    Explore at:
    xlsx
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Qiwei Wang; Xiaoya Zhu; Manman Wang; Fuli Zhou; Shuang Cheng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The coronavirus disease 2019 pandemic has impacted and changed consumer behavior because of a prolonged quarantine and lockdown. This study proposed a theoretical framework to explore and define the influencing factors of online consumer purchasing behavior (OCPB) based on electronic word-of-mouth (e-WOM) data mining and analysis. Data pertaining to e-WOM were crawled from smartphone product reviews from the two most popular online shopping platforms in China, Jingdong.com and Taobao.com. Data processing aimed to filter noise and translate unstructured data from complex text reviews into structured data. The machine learning-based K-means clustering method was utilized to cluster the influencing factors of OCPB. Comparing the clustering results with Kotler’s five product levels, the influencing factors of OCPB were clustered around four categories: perceived emergency context, product, innovation, and function attributes. This study contributes to OCPB research by data mining and analysis that can adequately identify the influencing factors based on e-WOM. The definition and explanation of these categories may have important implications for both OCPB and e-commerce.
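    The pipeline's shape (vectorise unstructured review text, then cluster with k-means) can be sketched as follows; the toy reviews and feature choices are illustrative, not the study's actual preprocessing:

    ```python
    # TF-IDF vectorisation of reviews followed by k-means clustering.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    reviews = ["battery lasts two days", "camera is blurry in low light",
               "fast delivery during lockdown", "screen cracked after a week"]
    X = TfidfVectorizer().fit_transform(reviews)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)    # cluster index per review
    ```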

  10. Data from: Different mechanisms explain decoupled co-occurrence patterns of native and non-native macroinvertebrates

    • portalcientifico.unileon.es
    Updated 2025
    + more versions
    Cite
    Tian, Chengzong; García‐Girón, Jorge; Kua, Zi Xun; Du, Xiaopei; Xiong, Fangyuan; Nistal-García, Alejandro; Zhou, Xiongdong; Xin, Wei; Li, Zhongyang (2025). Data from: Different mechanisms explain decoupled co-occurrence patterns of native and non-native macroinvertebrates [Dataset]. https://portalcientifico.unileon.es/documentos/68c41ac908c3ca034ca7a0f2
    Explore at:
    Dataset updated
    2025
    Authors
    Tian, Chengzong; García‐Girón, Jorge; Kua, Zi Xun; Du, Xiaopei; Xiong, Fangyuan; Nistal-García, Alejandro; Zhou, Xiongdong; Xin, Wei; Li, Zhongyang
    Description

    Biological invasion is a key driver of biodiversity loss, leading to significant changes in community composition and structure. Hence, understanding how biological invasions influence community assembly processes is crucial for identifying invasion mechanisms and developing management strategies aimed at minimizing their impacts on natural ecosystems. Beyond environmental filtering or niche-based exclusion, biotic interactions (e.g., interspecific competition) between invasive species and their native counterparts can also affect species distributions and local invasion dynamics. This study combined joint Species Distribution Models (jSDMs) with a long-term European-level dataset to uncover co-occurrence patterns and community organization of freshwater macroinvertebrates in the context of biological invasion. To do this, we considered functional traits, phylogenetic relationships, environmental niches, and residual variance potentially mirroring species-to-species interactions between non-native and native species. Environmental covariates exhibited significant differences in explaining variation in occurrences between native and non-native species, although environmental filtering had a more pronounced effect on native species. This finding supported the hypothesis that non-native species generally exhibit broader environmental niches. Indeed, our findings emphasized the importance of biotic filtering (in the form of interspecific competition and invasion meltdown among non-native species) acting beyond the abiotic environment in shaping the distribution of non-native and native species, providing a more nuanced view of the key drivers underlying invasion risk and success.

  11. A stakeholder-centered determination of High-Value Data sets: the use-case of Latvia

    • data-staging.niaid.nih.gov
    Updated Oct 27, 2021
    Cite
    Anastasija Nikiforova (2021). A stakeholder-centered determination of High-Value Data sets: the use-case of Latvia [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_5142816
    Explore at:
    Dataset updated
    Oct 27, 2021
    Dataset provided by
    University of Latvia
    Authors
    Anastasija Nikiforova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Latvia
    Description

    The data in this dataset were collected as a result of a survey of Latvian society (2021) aimed at identifying high-value data sets for Latvia, i.e. data sets that, in the view of Latvian society, could create value for the Latvian economy and society. The survey was created for both individuals and businesses. It is being made public both to act as supplementary data for the paper "Towards enrichment of the open government data: a stakeholder-centered determination of High-Value Data sets for Latvia" (author: Anastasija Nikiforova, University of Latvia) and in order for other researchers to use these data in their own work.

    The survey was distributed among Latvian citizens and organisations. The structure of the survey is available in the supplementary file (see Survey_HighValueDataSets.odt).

    Description of the data in this data set: structure of the survey and pre-defined answers (if any)

    1. Have you ever used open (government) data? – {(1) yes, once; (2) yes, there has been a little experience; (3) yes, continuously; (4) no, it wasn’t needed for me; (5) no, have tried but have failed}
    2. How would you assess the value of open government data that are currently available for your personal use or your business? – 5-point Likert scale, where 1 – none to 5 – very high
    3. If you ever used the open (government) data, what was the purpose of using them? – {(1) have not had to use; (2) to identify the situation for an object or an event (e.g. Covid-19 current state); (3) data-driven decision-making; (4) for the enrichment of my data, i.e. by supplementing them; (5) for better understanding of decisions of the government; (6) awareness of governments’ actions (increasing transparency); (7) forecasting (e.g. trends etc.); (8) for developing data-driven solutions that use only the open data; (9) for developing data-driven solutions, using open data as a supplement to existing data; (10) for training and education purposes; (11) for entertainment; (12) other (open-ended question)}
    4. What category(ies) of “high value datasets” is, in your opinion, able to create added value for society or the economy? – {(1) Geospatial data; (2) Earth observation and environment; (3) Meteorological; (4) Statistics; (5) Companies and company ownership; (6) Mobility}
    5. To what extent do you think the current data catalogue of Latvia’s Open data portal corresponds to the needs of data users/consumers? – 10-point Likert scale, where 1 – no data are useful, and 10 – fully corresponds, i.e. all potentially valuable datasets are available
    6. Which of the current data categories in Latvia’s open data portal, in your opinion, most corresponds to the “high value dataset”? – {(1) Foreign affairs; (2) business economy; (3) energy; (4) citizens and society; (5) education and sport; (6) culture; (7) regions and municipalities; (8) justice, internal affairs and security; (9) transport; (10) public administration; (11) health; (12) environment; (13) agriculture, food and forestry; (14) science and technologies}
    7. Which of them form your TOP-3? – {(1) Foreign affairs; (2) business economy; (3) energy; (4) citizens and society; (5) education and sport; (6) culture; (7) regions and municipalities; (8) justice, internal affairs and security; (9) transport; (10) public administration; (11) health; (12) environment; (13) agriculture, food and forestry; (14) science and technologies}
    8. How would you assess the value of the following data categories?
       8.1. sensor data – 5-point Likert scale, where 1 – not needed to 5 – highly valuable
       8.2. real-time data – 5-point Likert scale, where 1 – not needed to 5 – highly valuable
       8.3. geospatial data – 5-point Likert scale, where 1 – not needed to 5 – highly valuable
    9. What would these datasets be? I.e. what (sub)topic could these data be associated with? – open-ended question
    10. Which of the data sets currently available could be valuable and useful for society and businesses? – open-ended question
    11. Which of the data sets currently NOT available in Latvia’s open data portal could, in your opinion, be valuable and useful for society and businesses? – open-ended question
    12. How did you define them? – {(1) subjective opinion; (2) experience with data; (3) filtering out the most popular datasets, i.e. basing them on public opinion; (4) other (open-ended question)}
    13. How high could the value of these data sets be for you or your business? – 5-point Likert scale, where 1 – not valuable, 5 – highly valuable
    14. Do you represent any company/organization (are you working anywhere)? (if “yes”, please fill out the survey twice, i.e. as an individual user AND a company representative) – {yes; no; I am an individual data user; other (open-ended)}
    15. What industry/sector does your company/organization belong to? (if you do not work at the moment, please choose the last option) – {Information and communication services; Financial and insurance activities; Accommodation and catering services; Education; Real estate operations; Wholesale and retail trade; repair of motor vehicles and motorcycles; transport and storage; construction; water supply; waste water; waste management and recovery; electricity, gas supply, heating and air conditioning; manufacturing industry; mining and quarrying; agriculture, forestry and fisheries; professional, scientific and technical services; operation of administrative and service services; public administration and defence; compulsory social insurance; health and social care; art, entertainment and recreation; activities of households as employers; CSO/NGO; I am not a representative of any company}
    16. To which category does your company/organization belong in terms of its size? – {small; medium; large; self-employed; I am not a representative of any company}
    17. What is the age group that you belong to? (if you are an individual user, not a company representative) – {11..15, 16..20, 21..25, 26..30, 31..35, 36..40, 41..45, 46+, “do not want to reveal”}
    18. Please indicate the education or scientific degree that corresponds most to you (if you are an individual user, not a company representative) – {master’s degree; bachelor’s degree; Dr. and/or PhD; student (bachelor level); student (master level); doctoral candidate; pupil; do not want to reveal these data}

    Format of the files: .xls, .csv (for the first spreadsheet only), .odt

    Licenses or restrictions: CC-BY

  12. Cloud amount/frequency, NITRATE and other data from AIRCRAFT, USS DE STEIGUER (AGOR 12) and ACANIA in the NE Pacific from 1983-02-10 to 1984-08-10 (NCEI Accession 8500067)

    • catalog.data.gov
    • s.cnmilf.com
    Updated Oct 2, 2025
    + more versions
    Cite
    (Point of Contact) (2025). Cloud amount/frequency, NITRATE and other data from AIRCRAFT, USS DE STEIGUER (AGOR 12) and ACANIA in the NE Pacific from 1983-02-10 to 1984-08-10 (NCEI Accession 8500067) [Dataset]. https://catalog.data.gov/dataset/cloud-amount-frequency-nitrate-and-other-data-from-aircraft-uss-de-steiguer-agor-12-and-acania-
    Explore at:
    Dataset updated
    Oct 2, 2025
    Dataset provided by
    (Point of Contact)
    Description

    Data have been processed by NODC to the NODC standard Bathythermograph (XBT Aircraft) (C118), Bathythermograph (XBT) (C116), Bathythermograph XBT Selected Depths (SBT) (C125), and High-Resolution CTD/STD (F022) formats.

    The C116/C118 format contains temperature-depth profile data obtained using expendable bathythermograph (XBT) instruments. Cruise information, position, date and time were reported for each observation. The data record was comprised of pairs of temperature-depth values. Unlike the MBT Data File, in which temperature values were recorded at uniform 5 m intervals, the XBT data files contained temperature values at non-uniform depths. These depths were recorded at the minimum number of points ("inflection points") required to accurately define the temperature curve. Standard XBTs can obtain profiles to depths of either 450 or 760 m. With special instruments, measurements can be obtained to 1830 m. Prior to July 1994, XBT data were routinely processed to one of these standard types. XBT data are now processed and loaded directly into the NODC Ocean Profile Data Base (OPDB). Historic data from these two data types were loaded into the OPDB.

    The UBT (C125) format contains temperature-depth profile data obtained using expendable bathythermograph (XBT) instruments. Cruise information, position, date and time were reported for each observation. The data records are comprised of pairs of temperature-depth values. Depths are selected by the originator - usually at standard horizons or some fixed interval. Standard XBTs can obtain profiles to depths of either 450 or 760 m. Special instruments permitted measurements to be obtained to 1830 m.

    The F022 format contains high-resolution data collected using CTD (conductivity-temperature-depth) and STD (salinity-temperature-depth) instruments. As they are lowered and raised in the oceans, these electronic devices provide nearly continuous profiles of temperature, salinity, and other parameters. Data values may be subject to averaging or filtering or obtained by interpolation, and may be reported at depth intervals as fine as 1 m. Cruise and instrument information, position, date, time and sampling interval are reported for each station. Environmental data at the time of the cast (meteorological and sea surface conditions) may also be reported. The data record comprises values of temperature, salinity or conductivity, density (computed sigma-t), and possibly dissolved oxygen or transmissivity at specified depth or pressure levels. Data may be reported at either equally or unequally spaced depth or pressure intervals. A text record is available for comments.
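    Because XBT records store only the inflection points needed to define the temperature curve, a uniform-depth profile is recovered by interpolating between the reported pairs. A sketch with illustrative values (not taken from this accession):

    ```python
    # Rebuild a uniform temperature-depth profile from inflection points.
    import numpy as np

    depth_m = np.array([0.0, 15.0, 60.0, 210.0, 450.0])   # reported depths
    temp_c = np.array([18.2, 18.0, 12.5, 8.1, 5.4])       # matching temperatures

    grid = np.arange(0.0, 451.0, 5.0)                     # uniform 5 m grid
    profile = np.interp(grid, depth_m, temp_c)            # linear interpolation
    print(profile[:5])
    ```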

  13. BOLD5000

    • openneuro.org
    Updated Sep 14, 2018
    Cite
    Nadine Chang; John A. Pyles; Abhinav Gupta; Michael J. Tarr; Elissa M. Aminoff (2018). BOLD5000 [Dataset]. http://doi.org/10.18112/openneuro.ds001499.v1.1.1
    Explore at:
    Dataset updated
    Sep 14, 2018
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Nadine Chang; John A. Pyles; Abhinav Gupta; Michael J. Tarr; Elissa M. Aminoff
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    BOLD5000: Brains, Objects, Landscapes Dataset

    For details please refer to BOLD5000.org and our paper on arXiv (http://arxiv.org/abs/1809.01281)

    Participant Directories Content
    1) Four participants: CSI1, CSI2, CSI3, & CSI4
    2) Functional task data acquisition sessions: sessions #1-15. Each functional session includes:
       - 3 sets of fieldmaps (EPI opposite phase encoding; spin-echo opposite phase encoding pairs with partial & non-partial Fourier)
       - 9 or 10 functional scans of slow event-related 5000 scene data (5000scenes)
       - 1 or 0 functional localizer scans used to define scene selective regions (localizer)
       - each event.json file lists each stimulus, the onset time, and the participant’s response (participants performed a simple valence task)
    3) Anatomical data acquisition session: session #16. Anatomical data: T1-weighted MPRAGE scan, a T2-weighted SPACE, and diffusion spectrum imaging.

    Notes:
    - All MRI and fMRI data are provided with the Siemens pre-scan normalization filter applied.
    - CSI4 only participated in 10 MRI sessions: 1-9 were functional acquisition sessions, and 10 was the anatomical data acquisition session.

    Derivatives Directory Content
    1) fMRIprep:
       - Preprocessed data for all functional data of CSI1 through CSI4 (listed in folders for each participant: derivatives/fmriprep/sub-CSIX). Data were preprocessed both in T1w image space and on surface space. Functional data were motion corrected, susceptibility distortion corrected, and aligned to the anatomical data using bbregister. Please refer to the paper for details on preprocessing.
       - Reports resulting from fMRIprep, which include the success of anatomical alignment and distortion correction, among other measures of preprocessing success, are all listed in the sub-CSIX.html files.
    2) Freesurfer: Freesurfer reconstructions resulting from the fMRIprep preprocessing stream.
    3) MRIQC: image quality metrics (IQMs) of the dataset using MRIQC.
       - CSIX-func.csv files are text files with a list of all IQMs for each session, for each run.
       - CSIX-anat.csv files are text files with a list of all IQMs for the scans acquired in the anatomical session (e.g., MPRAGE).
       - CSIX_IQM.xls is an Excel workbook; each sheet lists the IQMs for a single run. This is the same data as CSIX-func.csv, formatted differently.
       - sub-CSIX/derivatives: contains .json files with the MRIQC/IQM results for each run.
       - sub-CSIX/reports: contains .html files with MRIQC/IQM results for each run, along with mean signal and standard deviation maps.
    4) spm: a directory that contains the masks used to define each region of interest (ROI) in each participant. There were 10 ROIs: early visual (EarlyVis), lateral occipital cortex (LOC), occipital place area (OPA), parahippocampal place area (PPA), and retrosplenial complex (RSC), for the left hemisphere (LH) and right hemisphere (RH).
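    The MRIQC outputs described above are plain text tables, so per-run quality metrics can be screened directly. A sketch (column names vary by MRIQC version; fd_mean is a typical framewise-displacement summary):

    ```python
    # Scan per-run image-quality metrics from a CSIX-func.csv table.
    import pandas as pd

    iqm = pd.read_csv("CSI1-func.csv")
    print(iqm.columns.tolist())            # which IQMs are present
    if "fd_mean" in iqm.columns:
        print(iqm["fd_mean"].describe())   # head-motion summary across runs
    ```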

  14. Data from: Defining signal thresholds in DNA microarrays: exemplary application for invasive cancer

    • catalog.data.gov
    • odgavaprod.ogopendata.com
    • +2more
    Updated Sep 6, 2025
    + more versions
    Cite
    National Institutes of Health (2025). Defining signal thresholds in DNA microarrays: exemplary application for invasive cancer [Dataset]. https://catalog.data.gov/dataset/defining-signal-thresholds-in-dna-microarrays-exemplary-application-for-invasive-cancer
    Explore at:
    Dataset updated
    Sep 6, 2025
    Dataset provided by
    National Institutes of Health
    Description

    Background
    Genome-wide or application-targeted microarrays containing a subset of genes of interest have become widely used as a research tool with the prospect of diagnostic application. Intrinsic variability of microarray measurements poses a major problem in defining signal thresholds for absent/present or differentially expressed genes. Most strategies have used fold-change threshold values, but variability at low signal intensities may invalidate this approach, and it does not provide information about false positives and false negatives.

    Results
    We introduce a method to filter false positives and false negatives from DNA microarray experiments. This is achieved by evaluating a set of positive and negative controls by receiver operating characteristic (ROC) analysis. As an advantage of this approach, users may define thresholds on the basis of sensitivity and specificity considerations. The area under the ROC curve allows quality control of microarray hybridizations. This method has been applied to custom-made microarrays developed for the analysis of invasive melanoma-derived tumor cells. It demonstrated that ROC analysis yields a threshold with fewer misclassified genes in microarray experiments.

    Conclusions
    Provided that a set of appropriate positive and negative controls is included on the microarray, ROC analysis obviates the inherent problem of arbitrarily selecting threshold levels in microarray experiments. The proposed method is applicable to both custom-made and commercially available DNA microarrays and will help to improve the reliability of predictions from DNA microarray experiments.
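    The thresholding idea is straightforward to sketch: score the positive and negative controls, build a ROC curve, and pick the operating point that suits the desired sensitivity/specificity trade-off (Youden's J below; the paper leaves the choice to the user). Control intensities here are simulated:

    ```python
    # ROC-based signal threshold from positive/negative control intensities.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)
    labels = np.r_[np.ones(50), np.zeros(50)]                    # control classes
    signal = np.r_[rng.normal(8, 2, 50), rng.normal(4, 2, 50)]   # log intensities

    fpr, tpr, thr = roc_curve(labels, signal)
    best = thr[np.argmax(tpr - fpr)]       # maximise sensitivity + specificity - 1
    print(f"AUC = {roc_auc_score(labels, signal):.2f}, threshold = {best:.2f}")
    ```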

  15. Bangladesh Number Dataset

    • listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). Bangladesh Number Dataset [Dataset]. https://listtodata.com/bangladesh-dataset
    Explore at:
    .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Authors
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    Bangladesh
    Variables measured
    phone numbers, email address, full name, address, city, state, gender, age, income, IP address
    Description

    Bangladesh number dataset provides contact information from trusted sources. We collect phone numbers only from reliable sources, and to ensure transparency we provide the source URL showing where each record was collected. We also offer 24/7 support: if you have a question or need help, we are always here. Because we care about accuracy, we compile the Bangladesh number dataset carefully from trusted sources, so you can rely on this data for business or personal use. We use opt-in data to respect privacy, which means you contact only people who want to hear from you. Bangladesh phone data gives you access to contacts across Bangladesh, and you can filter records by gender, age, and relationship status, making it easy to find exactly the people you want to connect with. The data follows all GDPR rules to keep it safe and legal, and our system regularly removes invalid entries so you get only accurate, valid numbers. List to Data is a helpful website for finding important phone numbers quickly, and our Bangladesh phone data suits businesses targeting specific groups: you can filter your list to focus on particular types of customers. The Bangladesh phone number list itself is a collection of phone numbers from people in Bangladesh; we provide 100% correct and valid numbers that are ready to use, and we offer a replacement guarantee if you ever receive an invalid number, so you will always have accurate data. The numbers we provide are collected with the customer's permission, for business and personal use alike.
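    As a sketch of how such a contact table might be segmented once exported (the file name and exact column labels are hypothetical, based on the variables listed above):

        import pandas as pd

        contacts = pd.read_csv("bangladesh_numbers.csv")  # hypothetical export

        # Keep one target segment, e.g. women aged 25-40:
        segment = contacts[(contacts["gender"] == "female")
                           & contacts["age"].between(25, 40)]
        print(segment[["full name", "phone numbers", "City"]].head())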

  16. Z

    Data from: Classification of web-based Digital Humanities projects...

    • data-staging.niaid.nih.gov
    • zenodo.org
    Updated Nov 28, 2024
    Cite
    Battisti, Tommaso (2024). Classification of web-based Digital Humanities projects leveraging information visualisation techniques [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_14192757
    Explore at:
    Dataset updated
    Nov 28, 2024
    Dataset provided by
    University of Bologna
    Authors
    Battisti, Tommaso
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description


    This dataset contains a list of 186 Digital Humanities projects leveraging information visualisation methods. Each project has been classified according to visualisation and interaction techniques, narrativity and narrative solutions, domain, methods for the representation of uncertainty and interpretation, and the employment of critical and custom approaches to visually represent humanities data.

    Classification schema: categories and columns

    The project_id column contains unique internal identifiers assigned to each project, while the last_access column records the most recent date (in DD/MM/YYYY format) on which each project was reviewed at the web address in the url column. The remaining columns can be grouped into descriptive categories that characterise projects along different dimensions (a short filtering example over these columns follows the category listings below):

    Narrativity. It reports the presence of narratives employing information visualisation techniques. Here, the term narrative encompasses both author-driven linear data stories and more user-directed experiences where the narrative sequence emerges from user exploration [1]. We define two columns to identify projects using visualisation techniques in narrative or non-narrative sections. Both conditions can be true for projects employing visualisations in both contexts. Columns:

    non_narrative (boolean)

    narrative (boolean)

    Domain. The humanities domain to which the project is related. We rely on [2] and the chapters of the first part of [3] to abstract a set of general domains. Column:

    domain (categorical):

    History and archaeology

    Art and art history

    Language and literature

    Music and musicology

    Multimedia and performing arts

    Philosophy and religion

    Other: both extra-list domains and cases of collections without a unique or specific thematic focus.

    Visualisation of uncertainty and interpretation. Building upon the frameworks proposed by [4] and [5], a set of categories was identified, highlighting a distinction between precise and impressional communication of uncertainty. Precise methods explicitly represent quantifiable uncertainty such as missing, unknown, or uncertain data, precisely locating and categorising it using visual variables and positioning. Two sub-categories are: interactive distinction, when uncertain data is not visually distinguishable from the rest of the data but can be dynamically isolated or included/excluded categorically through interaction techniques (usually filters); and visual distinction, when uncertainty visually “emerges” from the representation by means of dedicated glyphs and spatial or visual cues and variables. On the other hand, impressional methods communicate the constructed and situated nature of data [6], exposing the interpretative layer of the visualisation and indicating more abstract, unquantifiable uncertainty using graphical aids or interpretative metrics. Two sub-categories are: ambiguation, when the use of graphical expedients—like permeable glyph boundaries or broken lines—visually conveys the ambiguity of a phenomenon; and interpretative metrics, when expressive, non-scientific, or non-punctual metrics are used to build a visualisation. Column:

    uncertainty_interpretation (categorical):

    Interactive distinction

    Visual distinction

    Ambiguation

    Interpretative metrics

    Critical adaptation. We identify projects in which, for what concerns at least a visualisation, the following criteria are fulfilled: 1) avoid uncritical repurposing of prepackaged, generic-use, or ready-made solutions; 2) being tailored and unique to reflect the peculiarities of the phenomena at hand; 3) avoid extreme simplifications to embraces and depict complexity promoting time-spending visualisation-based inquiry. Column:

    critical_adaptation (boolean)

    Non-temporal visualisation techniques. We adopt and partially adapt the terminology and definitions from [7]. A column is defined for each type of visualisation and accounts for its presence within a project, also including stacked layouts and more complex variations. Columns and inclusion criteria:

    plot (boolean): visual representations that map data points onto a two-dimensional coordinate system.

    cluster_or_set (boolean): sets or cluster-based visualisations used to unveil possible inter-object similarities.

    map (boolean): geographical maps used to show spatial insights. While we do not specify the variants of maps (e.g., pin maps, dot density maps, flow maps, etc.), we make an exception for maps where each data point is represented by another visualisation (e.g., a map where each data point is a pie chart) by accounting for the presence of both in their respective columns.

    network (boolean): visual representations highlighting relational aspects through nodes connected by links or edges.

    hierarchical_diagram (boolean): tree-like structures such as tree diagrams, radial trees, but also dendrograms. They differ from networks for their strictly hierarchical structure and absence of closed connection loops.

    treemap (boolean): still hierarchical, but highlighting quantities expressed by means of area size. It also includes circle packing variants.

    word_cloud (boolean): clouds of words, where each instance’s size is proportional to its frequency in a related context.

    bars (boolean): includes bar charts, histograms, and variants. It coincides with “bar charts” in [7] but with a more generic term to refer to all bar-based visualisations.

    line_chart (boolean): the display of information as sequential data points connected by straight-line segments.

    area_chart (boolean): similar to a line chart but with a filled area below the segments. It also includes density plots.

    pie_chart (boolean): circular graphs divided into slices which can also use multi-level solutions.

    plot_3d (boolean): plots that use a third dimension to encode an additional variable.

    proportional_area (boolean): representations used to compare values through area size. Typically, using circle- or square-like shapes.

    other (boolean): it includes all other types of non-temporal visualisations that do not fall into the aforementioned categories.

    Temporal visualisations and encodings. In addition to non-temporal visualisations, a group of techniques to encode temporality is considered in order to enable comparisons with [7]. Columns:

    timeline (boolean): the display of a list of data points or spans in chronological order. They include timelines working either with a scale or simply displaying events in sequence. As in [7], we also include structured solutions resembling Gantt chart layouts.

    temporal_dimension (boolean): to report when time is mapped to any dimension of a visualisation, with the exclusion of timelines. We use the term “dimension” and not “axis” as in [7] as more appropriate for radial layouts or more complex representational choices.

    animation (boolean): temporality is perceived through an animation changing the visualisation according to time flow.

    visual_variable (boolean): another visual encoding strategy is used to represent any temporality-related variable (e.g., colour).

    Interaction techniques. A set of categories to assess affordable interaction techniques based on the concept of user intent [8] and user-allowed data actions [9]. The following categories roughly match the “processing”, “mapping”, and “presentation” actions from [9] and the manipulative subset of methods of the “how” an interaction is performed in the conception of [10]. Only interactions that affect the visual representation or the aspect of data points, symbols, and glyphs are taken into consideration. Columns:

    basic_selection (boolean): the demarcation of an element either for the duration of the interaction or more permanently until the occurrence of another selection.

    advanced_selection (boolean): the demarcation involves both the selected element and connected elements within the visualisation or leads to brush and link effects across views. Basic selection is tacitly implied.

    navigation (boolean): interactions that allow moving, zooming, panning, rotating, and scrolling the view but only when applied to the visualisation and not to the web page. It also includes “drill” interactions (to navigate through different levels or portions of data detail, often generating a new view that replaces or accompanies the original) and “expand” interactions generating new perspectives on data by expanding and collapsing nodes.

    arrangement (boolean): methods to organise visualisation elements (symbols, glyphs, etc.) or multi-visualisation layouts spatially through drag and drop or according to a criterion via more automatic triggers.

    change (boolean): visual encoding alterations involving different aspects of visualisation as a whole: the same content is presented with another visualisation technique; the change involves symbols or glyphs aspect (colour, size, shape, etc.); the visualisation type is unaltered, but the layout variant changes (e.g., to stacked layouts); or other changes like axes inversion and scale modifications. The presence of all the visualisation techniques involved in a change is reported.

    visualisation_filter (boolean): filters to exclude or include visualisation elements with respect to defined criteria, without reloading or generating a new visualisation. Unlike options triggering the fetch of new data to alter the visualisation content, filters seamlessly operate on existing visual elements.

    collection_filter (boolean): the interaction with visualised elements acts as a filter for a related collection or list of items (e.g., clicking a region on a map filters a list of items according to spatial metadata).

    aggregation (boolean): changes to the granularity of visual elements according to a
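    As referenced above, a minimal sketch of filtering the classification table with pandas (the file name is hypothetical, and boolean columns may be stored as TRUE/FALSE strings needing conversion, depending on the export):

        import pandas as pd

        projects = pd.read_csv("dh_projects.csv")  # hypothetical file name

        # Narrative projects with critically adapted visualisations that mark
        # uncertainty through visual distinction:
        subset = projects[
            projects["narrative"]
            & projects["critical_adaptation"]
            & (projects["uncertainty_interpretation"] == "Visual distinction")
        ]
        print(subset[["project_id", "domain", "url"]])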

  17. p

    Iran Number Dataset

    • listtodata.com
    • st.listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). Iran Number Dataset [Dataset]. https://listtodata.com/iran-dataset
    Explore at:
    Available download formats: .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Authors
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    Iran
    Variables measured
    phone numbers, email address, full name, address, city, state, gender, age, income, IP address
    Description

    Iran number dataset allows you to filter phone numbers by gender, age, and relationship status, so you can easily find the contacts you need. We build this list to give you the best data for your search, and we regularly remove invalid entries to keep it fresh and accurate, so you get the latest details each time. We follow GDPR rules to respect everyone’s privacy. Iran phone data contains 100% correct and valid phone numbers: every number is checked so it can be used easily, and if a number does not work, our replacement guarantee means we will replace any invalid numbers with correct ones at no extra cost. List to Data is a helpful website for finding important phone numbers quickly. The Iran phone number list includes phone numbers collected from trusted, opt-in sources, meaning people agreed to be contacted, and we gather numbers with customer permission to respect privacy and ensure fair information sharing. You can see the source websites to verify where we got the information, which builds trust in the list, and we offer 24/7 customer support to help you whenever you need it.

  18. w

    Granular bed filter development program, Phase II. Quarterly report,...

    • data.wu.ac.at
    html
    Updated Sep 29, 2016
    Cite
    (2016). Granular bed filter development program, Phase II. Quarterly report, September-December 1980 [Dataset]. https://data.wu.ac.at/odso/edx_netl_doe_gov/ZjIzMDAxM2QtYzM4MC00ZDllLWE4NWQtZGU5ZWY2Mzg4OWJk
    Explore at:
    Available download formats: html
    Dataset updated
    Sep 29, 2016
    Description

    The Department of Energy is sponsoring a multiphase program to investigate the filtration potential of the moving granular bed filter (GBF) for application in pressurized high-temperature energy conversion systems. The completed Phase I included the development of a mathematical model, a cold-flow parametric test series in a 0.746 m₀³/s GBF, and investigations of potential dust-plugging problems at the inlet screen. During the experimental program, collecting efficiencies of 99% and filter outlet loadings of less than 0.0074 g/m₀³ were demonstrated. The objectives of Phase II are to investigate the effects of elevated temperature and coal combustion particulate on GBF filtration performance; to update the analytical model developed in Phase I to reflect high-temperature effects; to optimize the filter internal configuration; and to demonstrate long-duration GBF performance with respect to corrosion, deposition, erosion, filtration efficiency, reliability, and controllability. Hot-flow testing to date has confirmed that the GBF configured with inlet and outlet screens exhibits a tendency for extensive and irreversible ash plugging. As an alternative, the potential advantages of a screenless configuration, which has higher filtration efficiency, have been achieved during both cold-flow and hot-flow tests, as previously reported. The continuation of experimental work pertinent to the development and design improvement of the GBF system is described and specifically addresses: an experimental study of granular flow coupled with countercurrent gas flow to define the flow and velocity profiles of the moving filter media; and ambient parametric testing of the screenless granular bed filter in the full-scale cold-flow model to determine bounds for its operation and to provide a data base from which comparisons can be made with a mathematical model describing GBF performance.
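    As a quick consistency check on the Phase I figures, collection efficiency is eta = 1 - C_out/C_in; the inlet loading below is an inference from the two reported numbers, not a value given in the report:

        # Demonstrated Phase I performance figures:
        eta = 0.99      # collecting efficiency
        c_out = 0.0074  # outlet loading, g/m0^3

        # Inlet loading implied by these two numbers (assumed, not reported):
        c_in = c_out / (1.0 - eta)
        print(f"Implied inlet dust loading: {c_in:.2f} g/m0^3")  # -> 0.74 g/m0^3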

  19. p

    Bahamas Number Dataset

    • listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). Bahamas Number Dataset [Dataset]. https://listtodata.com/bahamas-dataset
    Explore at:
    Available download formats: .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Authors
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    The Bahamas
    Variables measured
    phone numbers, email address, full name, address, city, state, gender, age, income, IP address
    Description

    Bahamas number dataset provides contact information from trusted sources: we collect phone numbers from reliable sources only, and to ensure transparency we provide source URLs showing where the data was gathered. We also offer 24/7 support; if you have any questions or need help, our team is always here. With List to Data, you can find phone numbers from different countries, and because we care about accuracy, we compile the Bahamas number dataset carefully from trusted sources, so you can rely on this data for business or personal use. We use opt-in data to respect privacy, which ensures you contact people who want to hear from you. Bahamas phone data gives you access to contacts in the Bahamas, and you can filter the records by gender, age, and relationship status, making it easy to find exactly the people you want to connect with. The data follows all GDPR rules to keep it safe and legal, and our team regularly removes invalid entries, so you only get correct, useful numbers. The Bahamas phone data is well suited to businesses targeting specific groups: you can filter your list to focus on certain types of customers, and with regular updates your phone data will always be ready when you need it. The Bahamas phone number list itself is a collection of phone numbers from people in the Bahamas; we provide 95% correct and valid numbers that are ready to use, and we offer a replacement guarantee if you ever receive an invalid number, so you will always have accurate data. The numbers we provide are collected with customer permission.

  20. d

    Audience Targeting Data | 330M+ Global Devices | Audience Data & Advertising...

    • datarade.ai
    .json, .csv
    Updated Feb 4, 2025
    Cite
    DRAKO (2025). Audience Targeting Data | 330M+ Global Devices | Audience Data & Advertising | API Delivery [Dataset]. https://datarade.ai/data-products/audience-targeting-data-330m-global-devices-audience-dat-drako
    Explore at:
    Available download formats: .json, .csv
    Dataset updated
    Feb 4, 2025
    Dataset authored and provided by
    DRAKO
    Area covered
    Czech Republic, Armenia, Namibia, Curaçao, Russian Federation, Equatorial Guinea, Serbia, Eritrea, Suriname, San Marino
    Description

    DRAKO is a Mobile Location Audience Targeting provider with a programmatic trading desk specialising in geolocation analytics and programmatic advertising. Through our customised approach, we offer business and consumer insights as well as addressable audiences for advertising.

    Mobile Location Data can be meaningfully transformed into Audience Targeting when used in conjunction with other datasets. Our expansive POI Data allows us to segment users by visitation to major brands and retailers, as well as to categorize them into syndicated segments. Beyond POI visits, our proprietary Home Location Model determines residents of geographic areas such as Designated Market Areas, Counties, or States. The Home Location Model also fuels our Geodemographic Census Data segments, as we are able to determine residents of the smallest census units. Additionally, we have audiences built from ticketed event and venue visitors, survey data, and retail data.

    All of our Audience Targeting is 100% deterministic in that it only includes high-quality, real visits to locations, as defined by a POI's building contour derived from satellite imagery. We never use a radius when building an audience unless requested. We have a horizontal accuracy of 5 m.

    Additionally, we can always cross reference your audience targeting with our syndicated segments:

    Overview of our Syndicated Audience Data Segments:
    - Brand/POI segments (specific named stores and locations)
    - Categories (behavioural segments - revealed habits)
    - Census demographic segments (HH income, race, religion, age, family structure, language, etc.)
    - Events segments (ticketed live events, conferences, and seminars)
    - Resident segments (state/province, CMAs, DMAs, city, county, sub-county)
    - Political segments (Canadian federal and provincial, US Congressional upper and lower house, US states, city elections, etc.)
    - Survey Data (psychosocial/demographic survey data)
    - Retail Data (receipt/transaction data)

    All of our syndicated segments are customizable. That means you can limit them to people within a certain geography, remove employees, include only the most frequent visitors, define your own custom lookback, or extend our audiences using our Home, Work, and Social Extensions.

    In addition to our syndicated segments, we are also able to run custom queries that return all the Mobile Ad IDs (MAIDs) seen at a specific location (address; latitude and longitude; or WKT84 polygon) or in your defined geographic area of interest (political districts, DMAs, Zip Codes, etc.).

    Beyond just returning all the MAIDs seen within a geofence, we are also able to offer additional customizable advantages:
    - Average precision between 5 and 15 meters
    - CRM list activation + extension
    - Extend beyond Mobile Location Data (MAIDs) with our device graph
    - Filter by frequency of visitations
    - Home and Work targeting (retrieve only employees or residents of an address)
    - Home extensions (devices that reside in the same dwelling as your seed geofence)
    - Rooftop-level address geofencing precision (no radius used EVER unless user specified)
    - Social extensions (devices in the same social circle as users in your seed geofence)
    - Turn analytics into addressable audiences
    - Work extensions (coworkers of users in your seed geofence)

    Data Compliance: All of our Audience Targeting Data is fully CCPA compliant and 100% sourced from SDKs (Software Development Kits), the most reliable and consistent mobile data stream with end-user consent available, with only a 4-5 day delay. This means that our location and device ID data comes from partnerships with over 1,500 mobile apps. This data comes with an associated location, which is how we are able to segment using geofences.

    Data Quality: In addition to partnering with trusted SDKs, DRAKO has additional screening methods to ensure that our mobile location data is consistent and reliable. This includes data harmonization and quality scoring from all of our partners in order to disregard MAIDs with a low quality score.
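    As an illustration of the rooftop-level, radius-free visit definition, a minimal sketch with shapely; the building contour coordinates are invented stand-ins for a POI footprint traced from satellite imagery:

        from shapely.geometry import Point, Polygon

        # Hypothetical building footprint as a WGS84 polygon of (lon, lat) pairs.
        building = Polygon([
            (-79.3832, 43.6532),
            (-79.3828, 43.6532),
            (-79.3828, 43.6529),
            (-79.3832, 43.6529),
        ])

        def is_visit(lon: float, lat: float) -> bool:
            # Deterministic test: the ping must fall inside the contour itself;
            # no radius buffer is applied around it.
            return building.contains(Point(lon, lat))

        print(is_visit(-79.3830, 43.6530))  # True: inside the footprint
        print(is_visit(-79.3840, 43.6530))  # False: nearby but outside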
