12 datasets found
  1. Chart Viewer

    • city-of-lawrenceville-arcgis-hub-lville.hub.arcgis.com
    Updated Sep 22, 2021
    Cite
    esri_en (2021). Chart Viewer [Dataset]. https://city-of-lawrenceville-arcgis-hub-lville.hub.arcgis.com/items/be4582b38d764de0a970b986c824acde
    Explore at:
    Dataset updated
    Sep 22, 2021
    Dataset authored and provided by
    esri_en
    Description

    Use the Chart Viewer template to display bar charts, line charts, pie charts, histograms, and scatterplots to complement a map. Include multiple charts to view with a map or side by side with other charts for comparison. Up to three charts can be viewed side by side or stacked, but you can access and view all the charts that are authored in the map.

    Examples:

    • Present a bar chart representing average property value by county for a given area.
    • Compare charts based on multiple population statistics in your dataset.
    • Display an interactive scatterplot based on two values in your dataset along with an essential set of map exploration tools.

    Data requirements: The Chart Viewer template requires a map with at least one chart configured.

    Key app capabilities:

    • Multiple layout options - choose Stack to display charts stacked with the map, or Side by side to display charts side by side with the map.
    • Manage charts - reorder, rename, or turn charts on and off in the app.
    • Multiselect charts - compare two charts in the panel at the same time.
    • Bookmarks - allow users to zoom and pan to a collection of preset extents that are saved in the map.
    • Home, Zoom controls, Legend, Layer List, Search.

    Supportability: This web app is designed responsively for use in browsers on desktops, mobile phones, and tablets. We are committed to ongoing efforts to make our apps as accessible as possible. Please feel free to leave a comment on how we can improve the accessibility of our apps for those who use assistive technologies.

  2. UC_vs_US Statistic Analysis.xlsx

    • figshare.com
    xlsx
    Updated Jul 9, 2020
    Cite
    F. (Fabiano) Dalpiaz (2020). UC_vs_US Statistic Analysis.xlsx [Dataset]. http://doi.org/10.23644/uu.12631628.v1
    Explore at:
    xlsx
    Dataset updated
    Jul 9, 2020
    Dataset provided by
    Utrecht University
    Authors
    F. (Fabiano) Dalpiaz
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes the following columns:

    A. a sequential student ID
    B. an ID that defines a random group label and the notation
    C. the notation used: User Story or Use Cases
    D. the case they were assigned to: IFA, Sim, or Hos
    E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
    F. a categorical representation of the grade as L/M/H, where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L is anything lower
    G. the total number of classes in the student's conceptual model
    H. the total number of relationships in the student's conceptual model
    I. the total number of classes in the expert's conceptual model
    J. the total number of relationships in the expert's conceptual model
    K-O. the total numbers of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
    P. the researchers' judgement of how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present
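    The grade banding in column F can be written as a small helper; this is a sketch of the stated rule, not code shipped with the dataset:

```python
def grade_category(grade):
    """Map a 0-100 exam grade to column F's L/M/H bands:
    H if grade >= 80, M if 65 <= grade < 80, L otherwise.
    Returns None for missing grades (empty cells)."""
    if grade is None:
        return None
    if grade >= 80:
        return "H"
    if grade >= 65:
        return "M"
    return "L"
```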

    Tagging scheme:
    Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
    Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
    System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
    Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
    Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.

    All the calculations and information provided in the following sheets originate from that raw data.

    Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection, including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.

    Sheet 3 (Size-Ratio): The number of classes within the student model divided by the number of classes within the expert model is calculated (describing the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.

    Sheet 4 (Overall): Provides an overview of all subjects regarding the encountered situations, completeness, and correctness. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the numbers of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. It is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the numbers of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with diverging stacked bar charts that illustrate correctness and completeness.
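    Written out as code, the two definitions above reduce to simple ratios over the AL/WR/SO/OM counts from columns K-O; this is a minimal sketch, not code from the workbook:

```python
def correctness(al, wr, so, om):
    """AL / (AL + OM + SO + WR): share of expert-model classes
    that are fully aligned in the student model."""
    return al / (al + om + so + wr)

def completeness(al, wr, om):
    """(AL + WR) / (AL + WR + OM): share of expert-model classes
    represented (correctly or not) in the student model."""
    return (al + wr) / (al + wr + om)
```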

    For sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (T-test) and effect size (Hedges' g) for both completeness and correctness are provided. Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html. The independent and moderating variables can be found as follows:

    Sheet 5 (By-Notation): Model correctness and model completeness are compared by notation - UC, US.

    Sheet 6 (By-Case): Model correctness and model completeness are compared by case - SIM, HOS, IFA.

    Sheet 7 (By-Process): Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.

    Sheet 8 (By-Grade): Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
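    The Hedges' g reported at the bottom of these sheets can be sketched as follows; this is a standard formulation with the small-sample bias correction, offered as an assumption about what the linked online calculator computes, not as the authors' exact implementation:

```python
from math import sqrt
from statistics import mean, variance

def hedges_g(sample_a, sample_b):
    """Effect size between two independent samples (sketch).

    Cohen's d with a pooled standard deviation, multiplied by the
    small-sample bias correction 1 - 3 / (4 * (n1 + n2) - 9).
    """
    n1, n2 = len(sample_a), len(sample_b)
    pooled_sd = sqrt(((n1 - 1) * variance(sample_a)
                      + (n2 - 1) * variance(sample_b)) / (n1 + n2 - 2))
    d = (mean(sample_a) - mean(sample_b)) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))
```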

  3. Data from: A fine-grained dataset of visualisation and interaction practices in web-based Digital Humanities projects

    • zenodo.org
    csv, tsv
    Updated Feb 24, 2026
    Cite
    Tommaso Battisti (2026). A fine-grained dataset of visualisation and interaction practices in web-based Digital Humanities projects [Dataset]. http://doi.org/10.5281/zenodo.18710667
    Explore at:
    tsv, csv
    Dataset updated
    Feb 24, 2026
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tommaso Battisti
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset classifies 587 visualisation–interaction units extracted from 186 web-based Digital Humanities projects, previously classified in a related dataset [1] (https://doi.org/10.5281/zenodo.14192758), allowing cross-references between them. Each row represents a distinct combination of visualisation technique(s) (e.g., map, bar chart, network) and associated interactive features within a project. The dataset provides a finer-grained view of design choices, documenting how visualisations and interactive possibilities are implemented, including their connection to narrative or non-narrative contexts, temporal encodings, and multi-view or reconfiguration strategies.

    The building blocks of our dataset: defining visualisation–interaction units

    A visualisation–interaction unit is a distinct configuration combining a visualisation technique (or multiple techniques when linked through coordinated views) with a specific set of interactive features. Following [2], we consider these elements as working interdependently to achieve a shared data-related goal.
    These units form the basic level of analysis in our dataset, with each row representing one unit. Units are distinguished not only by their visualisation technique and affordable interaction, but also by their temporal characteristics and narrative context. Temporal encodings—such as time axes, animated transitions, or other time-based variables—define a new unit even if the visualisation and interaction remain unchanged. Similarly, an identical configuration appearing in both a narrative and a non-narrative context counts as two separate units, reflecting their differing intent and function.
    Accordingly, the number of units in a project does not directly correspond to the number of visualisations it contains. Two otherwise identical charts are treated as distinct units if they differ in interactive features, temporal encoding, or narrative context, while exact duplicates without variation are counted as a single unit. For example, if a project contains five bar charts that all support drill-down on click, they are counted as a single unit. Conversely, if the same five bar charts each offer different interactive capabilities, they are treated as separate units based on their unique visualisation–interaction combinations.
    Every unit includes at least one visualisation technique, although interaction may or may not be present (for instance, in a static chart).
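    The counting rule above can be sketched in Python, treating a unit as the combination of visualisation techniques, interactive features, temporal encoding, and narrative context; the field names here are illustrative, not the dataset's column names:

```python
def count_units(charts):
    """Count distinct visualisation-interaction units.

    Each chart is described by: 'vis' (set of techniques), 'interactions'
    (set of allowable actions), 'temporal' (encoding or None), and
    'narrative' (bool). Exact duplicates collapse into one unit; a
    difference in any of the four aspects creates a new unit.
    """
    return len({
        (frozenset(c["vis"]), frozenset(c["interactions"]),
         c["temporal"], c["narrative"])
        for c in charts
    })
```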

    Classification schema: categories and columns

    Identifiers. Three columns in the dataset are dedicated to uniquely identifying units and their relationships within projects:

    • project_id: the identifier of the project to which the unit belongs. This reuses the same incremental IDs from [1] (https://doi.org/10.5281/zenodo.14192758) to enable cross-referencing between datasets.

    • vis_unit_id: the identifier of the individual visualisation–interaction unit. IDs increment within each project and reset to 1 for a new project.

    • visualisation_version: an identifier used to track interactive transformations of visualisations. Multiple rows can share the same project_id and vis_unit_id if they represent different states of the same view, triggered by user interactions that modify the visual form.
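    A minimal sketch of how the three identifier columns can be used, assuming the rows have already been loaded as dicts; the loading step and record shapes are hypothetical:

```python
def unit_key(row):
    """Composite key uniquely identifying a row: several rows may share
    project_id and vis_unit_id when they capture different interactive
    states (visualisation_version) of the same view."""
    return (row["project_id"], row["vis_unit_id"], row["visualisation_version"])

def attach_project_info(units, projects):
    """Join unit-level rows to the project-level records of [1]
    via the shared project_id (record shapes are hypothetical)."""
    return [{**u, "project": projects[u["project_id"]]} for u in units]
```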

    Narrativity. We record whether a visualisation–interaction unit is employed within a narrative context. Some projects contain units exclusively in narrative or non-narrative settings, while others include units in both. The relevant columns are:

    • non_narrative: a boolean value indicating whether the unit appears in non-narrative contexts.

    • narrative: a boolean value indicating whether the unit is used in narrative contexts (including both strongly guided, author-driven data stories and more interactive, user-driven narratives).

    Visualisation techniques. We adopt, and where necessary adapt, the terminology and definitions from [3]. Each column corresponds to a specific type of visualisation and indicates (by means of a boolean value) whether that visualisation technique is present in a given visualisation–interaction unit. The following columns and inclusion criteria are used to encode this information:

    • plot: visual representations that map data points onto a two-dimensional coordinate system.

    • cluster_or_set: sets or cluster-based visualisations used to unveil possible inter-object similarities.

    • map: geographical maps used to show spatial insights. While we do not specify the variants of maps (e.g., pin maps, dot density maps, flow maps, etc.), we make an exception for maps where each data point is represented by another visualisation (e.g., a map where each data point is a pie chart) by accounting for the presence of both in their respective columns.

    • network: visual representations highlighting relational aspects through nodes connected by links or edges.

    • hierarchical_diagram: tree-like structures such as tree diagrams, radial trees, but also dendrograms. They differ from networks for their strictly hierarchical structure and absence of closed connection loops.

    • treemap: still hierarchical, but highlighting quantities expressed by means of area size. It also includes circle packing variants.

    • word_cloud: clouds of words, where each instance’s size is proportional to its frequency in a related context.

    • bars: includes bar charts, histograms, and variants. It coincides with “bar charts” in [7] but with a more generic term to refer to all bar-based visualisations.

    • line_chart: the display of information as sequential data points connected by straight-line segments.

    • area_chart: similar to a line chart but with a filled area below the segments. It also includes density plots.

    • pie_chart: circular graphs divided into slices, which can also use multi-level solutions.

    • plot_3d: plots that use a third dimension to encode an additional variable.

    • proportional_area: representations used to compare values through area size, typically using circle- or square-like shapes.

    • timeline: the display of a list of data points or spans in chronological order. They include timelines working either with a scale or simply displaying events in sequence. As in [3], we also include structured solutions resembling Gantt chart layouts.

    • other: all other types of non-temporal visualisations that do not fall into the aforementioned categories.

    Temporal encodings. We identify techniques used to encode temporality (except for timelines, where temporal encoding is tacitly assumed). Columns:

    • temporal_dimension: to report when time is mapped to any dimension of a visualisation. We use the term “dimension” and not “axis” as in [3] as more appropriate for radial layouts or more complex representational choices.

    • animation: temporality is perceived through an animation changing the visualisation according to time flow.

    • visual_variable: another visual encoding strategy is used to represent any temporality-related variable (e.g. colour).

    Multi-type coordinated views. Tracking coordinated views across the dataset is limited to cases where multiple visualisation types can be clearly identified within a single view. For these instances, a dedicated column indicates which visualisation—if any—plays a central or dominant role:

    • primary_visualisation: contains the name of the visualisation technique (as defined in the corresponding column) that holds a dominant role in the coordinated view. If no single type can be considered guiding because multiple types have similar perceived importance, the column contains "NA".

    Interactions and allowable actions. A set of categories to assess affordable interactions based on the concept of user intent [2] and user-allowed actions [4]. The following categories roughly match the manipulative subset of methods in the conception of [5]. Only interactions that affect the appearance of the visualisation or the visual representation of its data points, symbols, and glyphs are taken into consideration. A two-level analysis is enabled by the columns, referring to interaction categories also explored at an aggregated project level in [1], and their values, exposing more specific interactive capabilities (multiple values are separated by a semicolon). By interaction capabilities, we refer to the interactive possibilities offered by a visualisation system. Specifically, we adopt the term allowable actions [4] to denote the range of interactions users can perform to modify the representation. They include:

    • basic_selection: the demarcation of an element either for the duration of the interaction (highlight) or more permanently until the occurrence of another selection

  4. Submarine Cable Features Dataset

    • kaggle.com
    zip
    Updated Dec 18, 2023
    Cite
    The Devastator (2023). Submarine Cable Features Dataset [Dataset]. https://www.kaggle.com/datasets/thedevastator/submarine-cable-features-dataset
    Explore at:
    zip (15637 bytes)
    Dataset updated
    Dec 18, 2023
    Authors
    The Devastator
    Description

    Submarine Cable Features Dataset

    Submarine Cable Features: Scale, Description, and Effective Dates

    By Homeland Infrastructure Foundation [source]

    About this dataset

    The Submarine Cables dataset provides a comprehensive collection of features related to submarine cables. It includes information such as the scale band, description, and effective dates of these cables. These data are specifically designed to support coastal planning at both regional and national scales.

    The dataset is derived from 2010 NOAA Electronic Navigational Charts (ENCs), along with 2009 NOAA Raster Navigational Charts (RNCs) which were updated in 2013 using the most recent RNCs as a reference point. The source material's scale varied significantly, resulting in discontinuities between multiple sources that were resolved with minimal spatial adjustments.

    Polyline features representing submarine cables were extracted from the original sources while excluding 'cable areas' noted within the data. The S-57 data model was modified for improved readability and performance purposes.

    Overall, this dataset provides valuable information regarding the occurrence and characteristics of submarine cables in and around U.S. navigable waters. It serves as an essential resource for coastal planning efforts at various geographic scales.

    How to use the dataset

    Here's a guide on how to effectively utilize this dataset:

    1. Familiarize Yourself with the Columns

    The dataset contains multiple columns that provide important information:

    • scaleBand: This categorical column indicates the scale band of each submarine cable.
    • description: The text column provides a description of each submarine cable.
    • effectiveDate: Indicates the effective date of the information about each submarine cable.

    Understanding these columns will help you navigate and interpret the data effectively.

    2. Explore Scale Bands

    Start by analyzing the distribution of different scale bands in the dataset. The scale band categorizes submarine cables based on their size or capacity. Identifying patterns or trends within specific scale bands can provide valuable insights into how submarine cables are deployed.

    For example, you could analyze which scale bands are most commonly used in certain regions or countries, helping coastal planners understand infrastructure needs and potential connectivity gaps.
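    A quick way to look at the scaleBand distribution with pandas; the inline rows below are illustrative, and the CSV filename is a hypothetical placeholder:

```python
import pandas as pd

# Illustrative rows; in practice load the Kaggle CSV, e.g.
# df = pd.read_csv("submarine_cables.csv")  # hypothetical filename
df = pd.DataFrame({
    "scaleBand": ["regional", "national", "regional", "regional"],
    "description": ["Cable A", "Cable B", "Cable C", "Cable D"],
})

# Distribution of cables across scale bands
counts = df["scaleBand"].value_counts()
print(counts)
```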

    3. Analyze Cable Descriptions

    The description column provides detailed information about each submarine cable's characteristics, purpose, or intended use. By examining these descriptions, you can uncover specific attributes related to each cable.

    This information can be crucial when evaluating potential impacts on marine ecosystems, identifying areas prone to damage or interference with other maritime activities, or understanding connectivity options for coastal regions.

    4. Consider Effective Dates

    Effective dates play an important role in keeping track of when the information about a particular cable was collected or updated.

    By considering effective dates over time, you can:

    • Monitor changes in infrastructure deployment strategies.
    • Identify areas where new cables have been installed.
    • Track outdated infrastructure that may need replacement or upgrades.

    5. Combine with Other Datasets

    To gain a comprehensive understanding and unlock deeper insights, consider integrating this dataset with other relevant datasets. For example:

    • Population density data can help identify areas in high need of improved connectivity.
    • Coastal environmental data can help assess potential ecological impacts of submarine cables.

    By merging datasets, you can explore relationships, draw correlations, and make more informed decisions based on the available information.
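    A sketch of such a join with pandas; both tables here are toy examples, and the population dataset is hypothetical:

```python
import pandas as pd

# Toy cable summary per region (illustrative)
cables = pd.DataFrame({"region": ["NE", "SE"], "cable_count": [12, 7]})

# Hypothetical companion dataset with population density
population = pd.DataFrame({"region": ["NE", "SE"], "pop_density": [350, 120]})

# Left join on the shared region key keeps every cable row
merged = cables.merge(population, on="region", how="left")
```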

    6. Visualize the Data

    Create meaningful visualizations to better understand and communicate insights from the dataset. Utilize scatter plots, bar charts, heatmaps, or GIS maps.

    Research Ideas

    • Coastal Planning: The dataset can be used for coastal planning at both regional and national scales. By analyzing the submarine cable features, planners can assess the impact of these cables on coastal infrastructure development and design plans accordingly.
    • Communication Network Analysis: The dataset can be utilized to analyze the connectivity and coverage of submarine cable networks. This information is valuable for telecommunications companies and network providers to understand gaps in communication infras...
  5. Hong Kong Social Contact Dynamics

    • kaggle.com
    zip
    Updated Feb 5, 2023
    Cite
    The Devastator (2023). Hong Kong Social Contact Dynamics [Dataset]. https://www.kaggle.com/datasets/thedevastator/hong-kong-social-contact-dynamics
    Explore at:
    zip (161226 bytes)
    Dataset updated
    Feb 5, 2023
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    Hong Kong
    Description

    Hong Kong Social Contact Dynamics

    Understanding Age, Gender and Network Dynamics

    By [source]

    About this dataset

    This dataset provides an in-depth look at the dynamics of social interaction in Hong Kong. It contains comprehensive information on individuals, households, and interactions between individuals, such as their ages, genders, and the frequency and duration of contact. The data can be used to evaluate social and economic trends, behaviors, and dynamics at different levels: for example, to recognize population-level trends such as the age and gender diversification of contacts, or to investigate the structure of social networks and the implications of contact patterns for health and economic outcomes. Additionally, it offers valuable insights into different groups of people, including their activities related to permanent residence, work, and leisure, by enabling one to understand their interactions and contact dynamics within their respective populations. Ultimately, this dataset is key to attaining a comprehensive understanding of the social contact dynamics that are fundamental in Hong Kong's society today.


    How to use the dataset

    This dataset provides detailed information about the social contact dynamics in Hong Kong. With this dataset, it is possible to gain a comprehensive understanding of the patterns of various forms of social contact - from permanent residence and work contacts to leisure contacts. This guide will provide an overview and guidelines on how to use this dataset for analysis.

    Exploring Trends and Dynamics:

    To begin exploring the trends and dynamics of social contact in Hong Kong, start by looking at demographic factors such as age, gender, ethnicity, and educational attainment associated with different types of contacts (permanent residence/work/leisure). Consider the frequency and duration of contacts within these segments to identify any potential differences between them. Additionally, look at how these factors interact with each other – observe which segments have higher levels of interaction with each other or if there are any differences between different population groups based on their demographic characteristics. This can be done through visualizations such as line graphs or bar charts which can illustrate trends across timeframes or population demographics more clearly than raw numbers would alone.

    Investigating Social Networks:

    The data collected through this dataset also allows for investigation into social networks – understanding who connects with whom, both in real-life interactions and through digital channels (if applicable). Focus on analyzing individual or family networks rather than larger groups in order to get a clearer picture without adding too much complexity to the analysis. Analyze commonalities among individuals within a network even after controlling for factors that could affect interaction, such as age or gender – utilize clustering techniques for this step if appropriate – then compare networks between individuals/families using graph theory methods such as path length distributions (the average number of relationships one has), degrees (the number of links connected to one individual or family unit), centrality measures (identifying individuals who serve an important role bridging two different parts of the network), etc. These methods will help provide insights into varying structures between large groups rather than focusing only on small-scale personal connections among friends, colleagues, or relatives, which may not always offer accurate portrayals due to their naturally limited scope.
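    Degree, the simplest of these network measures, can be computed directly from a plain edge list; this sketch assumes contacts are available as undirected pairs (libraries such as networkx provide the richer centrality measures mentioned above):

```python
from collections import Counter

def degrees(edges):
    """Degree of each node in an undirected contact network,
    given as (person_a, person_b) pairs."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)
```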

    Modeling Health Implications:

    Finally, consider modeling health implications stemming from these observed patterns – particularly implications that may not be captured by simpler measures like count per contact hour (which does not differentiate based on intensity). Take into account aspects like viral transmission risk by analyzing secondary effects generated from the contact events captured in the data – things like physical proximity when multiple people meet up together over multiple days.

    Research Ideas

    • Analyzing the age, gender and contact dynamics of different areas within Hong Kong to understand the local population trends and behavior.
    • Investigating the structure of social networks to study how patterns of contact vary among socio economic backgro...
  6. Airlines Flights Data

    • kaggle.com
    zip
    Updated Jul 29, 2025
    Cite
    Data Science Lovers (2025). Airlines Flights Data [Dataset]. https://www.kaggle.com/datasets/rohitgrewal/airlines-flights-data
    Explore at:
    zip (2440299 bytes)
    Dataset updated
    Jul 29, 2025
    Authors
    Data Science Lovers
    License

    http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    📹Project 11 - Flights Data Analysis with Python, on YouTube - https://youtu.be/gu3Ot78j_Gc

    🖇️ Enroll in our Udemy course "Python Data Analytics Projects" - https://www.udemy.com/course/bigdata-analysis-python/?referralCode=F75B5F25D61BD4E5F161

    Airlines Flights Dataset for Different Cities

    The flight booking dataset of various airlines was scraped date-wise from a well-known travel website, in a structured format. The dataset contains records of flight travel between cities in India. Multiple features are present, such as source and destination city, arrival and departure time, and the duration and price of the flight.

    This data is available as a CSV file. We are going to analyze this data set using the Pandas DataFrame.

    This analysis will be helpful for those working in the airline and travel domains.

    Using this dataset, we answered multiple questions with Python in our Project.

    Q.1. What airlines appear in the dataset, and with what frequencies?

    Q.2. Show bar graphs representing the departure time and arrival time.

    Q.3. Show bar graphs representing the source city and destination city.

    Q.4. Does price vary with airline?

    Q.5. Does the ticket price change based on the departure time and arrival time?

    Q.6. How does the price change with a change in source and destination?

    Q.7. How is the price affected when tickets are bought just 1 or 2 days before departure?

    Q.8. How does the ticket price vary between Economy and Business class?

    Q.9. What is the average price of a Vistara flight from Delhi to Hyderabad in Business class?
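    Several of these questions reduce to one-liners in pandas; for instance, Q.1 is a value_counts and Q.8 a groupby. The inline rows and lowercase column names below are illustrative, not the dataset's exact schema:

```python
import pandas as pd

# Illustrative subset; in practice: df = pd.read_csv("flights.csv")  # hypothetical name
df = pd.DataFrame({
    "airline": ["Vistara", "Indigo", "Vistara", "AirAsia"],
    "class": ["Business", "Economy", "Business", "Economy"],
    "price": [12000, 4500, 13000, 3900],
})

freq = df["airline"].value_counts()                  # Q.1: airlines and frequencies
avg_by_class = df.groupby("class")["price"].mean()   # Q.8: price by seat class
```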

    Enrol in our Udemy courses:

    1. Python Data Analytics Projects - https://www.udemy.com/course/bigdata-analysis-python/?referralCode=F75B5F25D61BD4E5F161
    2. Python For Data Science - https://www.udemy.com/course/python-for-data-science-real-time-exercises/?referralCode=9C91F0B8A3F0EB67FE67
    3. Numpy For Data Science - https://www.udemy.com/course/python-numpy-exercises/?referralCode=FF9EDB87794FED46CBDF

    These are the main features/columns available in the dataset:

    1) Airline: The name of the airline company is stored in the airline column. It is a categorical feature having 6 different airlines.

    2) Flight: Flight stores information regarding the plane's flight code. It is a categorical feature.

    3) Source City: City from which the flight takes off. It is a categorical feature having 6 unique cities.

    4) Departure Time: This is a derived categorical feature created by grouping time periods into bins. It stores information about the departure time and has 6 unique time labels.

    5) Stops: A categorical feature with 3 distinct values that stores the number of stops between the source and destination cities.

    6) Arrival Time: This is a derived categorical feature created by grouping time intervals into bins. It has six distinct time labels and keeps information about the arrival time.

    7) Destination City: City where the flight will land. It is a categorical feature having 6 unique cities.

    8) Class: A categorical feature that contains information on seat class; it has two distinct values: Business and Economy.

    9) Duration: A continuous feature that displays the overall amount of time it takes to travel between cities in hours.

    10) Days Left: This is a derived feature calculated by subtracting the booking date from the trip date.

    11) Price: The target variable; it stores the ticket price.
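Questions like Q.9 reduce to a simple filter-and-average. A minimal sketch in pandas, assuming hypothetical column names drawn from the feature list above (the actual CSV headers may differ) and invented prices:

```python
import pandas as pd

# Tiny invented sample; only the column names mirror the feature list above.
df = pd.DataFrame({
    "airline": ["Vistara", "Vistara", "Indigo"],
    "source_city": ["Delhi", "Delhi", "Delhi"],
    "destination_city": ["Hyderabad", "Hyderabad", "Mumbai"],
    "class": ["Business", "Business", "Economy"],
    "price": [45000, 47000, 6000],
})

# Filter to Vistara, Delhi -> Hyderabad, Business class, then average the price
mask = (
    (df["airline"] == "Vistara")
    & (df["source_city"] == "Delhi")
    & (df["destination_city"] == "Hyderabad")
    & (df["class"] == "Business")
)
avg_price = df.loc[mask, "price"].mean()
print(avg_price)  # 46000.0
```

The same mask-and-aggregate pattern answers Q.5 through Q.8 by swapping in the relevant columns (departure time, class, days left).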

  7. Hulu Popular Shows Dataset

    • kaggle.com
    zip
    Updated Dec 3, 2023
    Cite
    The Devastator (2023). Hulu Popular Shows Dataset [Dataset]. https://www.kaggle.com/datasets/thedevastator/hulu-popular-shows-dataset
    Explore at:
    zip(359148 bytes)Available download formats
    Dataset updated
    Dec 3, 2023
    Authors
    The Devastator
    Description

    Hulu Popular Shows Dataset

    Dataset containing information on the top 1,000 most popular shows on Hulu

    By Chase Willden [source]

    About this dataset

    The Hulu Shows dataset is a comprehensive collection of information on the top 1,000 most popular shows available on the streaming platform Hulu. This dataset provides detailed insights into each show, including key details, availability, ratings, and other relevant information.

    The dataset aims to provide an objective analysis of Hulu's show offerings by offering a wide range of data points. It allows users to understand the diversity and popularity of shows available on Hulu and make informed decisions based on their preferences.

    Each entry in the dataset includes essential details about the shows like title, genre(s), runtime, release year, language(s), country of origin, description or summary of the plot. Additionally, it provides information about key cast members involved in each show.

    Availability and accessibility details for users interested in watching these shows on Hulu's platform are mentioned as well; this includes whether a show is still ongoing or has ended its run, and whether all seasons are available for streaming or only selected ones.

    Ratings play an important role when choosing what show to watch; therefore this dataset includes various rating metrics like IMDb rating (based on user ratings), Rotten Tomatoes critics score (based on professional reviews), Rotten Tomatoes audience score (based on viewer feedback), and Metacritic score (aggregated from multiple sources).

    To understand viewership trends more comprehensively, information about how many episodes are available for each show, along with episode durations, is included. This can give insights into binge-watching potential, or help evaluate whether shorter episodes might be preferred over longer ones.

    Furthermore, since streaming quality matters to the user experience, the video resolution options (e.g., SD or HD) that Hulu provides for each series have been recorded too.

    Lastly, an additional aspect worth considering when selecting which shows to invest time in is whether there are parental warnings for explicit content in certain programs. Similarly, noting whether subtitles are available can help users with hearing impairments find suitable, accessible content.

    The Hulu Shows dataset has been meticulously collated and organized to provide a comprehensive overview of the most popular shows on Hulu. This dataset can serve as a valuable resource for users, researchers, or analysts looking to evaluate the streaming platform's offerings and make informed decisions about their entertainment choices.

    How to use the dataset

    Understanding the Columns

    Before diving into any analysis, it's crucial to understand the meaning of each column in the dataset. Here's a brief explanation of each column:

    • Show Name: The name/title of the show.
    • Genre(s): The genre(s) or category/categories to which the show belongs.
    • Run Time: The duration in minutes for each episode or average duration across episodes.
    • Number of Seasons: Total number of seasons available for the show.
    • Rating: Average viewer rating for the show ranging from 0-10 (provided by users).
    • Description: Brief summary or synopsis describing what the show is about.
    • Episodes: Numbered list containing episode names along with their respective release dates (if available).
    • Year Released: Year when the series was initially released.
    • IMDB Rating: Ratings provided by IMDB users on a scale from 0-10.
    • Hulu Link, Poster Link, IMDB Link, IMDB Poster Link: URL links providing access to additional information about each specific show.

    Exploring Different Genres

    One interesting aspect that can be explored using this dataset is analyzing different genres and their popularity on Hulu. You can create visualizations showing which genres have more shows available compared to others.

    For example:

    import pandas as pd
    import matplotlib.pyplot as plt
    
    # Load the dataset
    df = pd.read_csv('hulu_popular_shows_dataset.csv')
    
    # Count the number of shows per genre
    genre_counts = df['Genre(s)'].value_counts().sort_values(ascending=False)
    
    # Plot a bar chart to visualize the counts by genre
    plt.figure(figsize=(12, 6))
    genre_counts.plot(kind='bar')
    plt.title('Number of Shows per Genre on Hulu')
    plt.xlabel('Genre')
    plt.ylabel('Number of Shows')
    plt.tight_layout()
    plt.show()
    
  8. UFO sightings since 1906

    • kaggle.com
    zip
    Updated Feb 24, 2025
    Cite
    Hassan-sv (2025). UFO sightings since 1906 [Dataset]. https://www.kaggle.com/datasets/hassansv/ufo-sightings-since-1906
    Explore at:
    zip(5601313 bytes)Available download formats
    Dataset updated
    Feb 24, 2025
    Authors
    Hassan-sv
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0 - http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Description

    Overview of the Dataset

    The UFO sightings dataset contains records of UFO sightings reported globally since 1906. The dataset includes the following columns:

    datetime: The date and time of the sighting.
    
    day: The day of the week when the sighting occurred.
    
    city: The city where the sighting was reported.
    
    state: The state or region where the sighting occurred.
    
    country: The country where the sighting was reported.
    
    shape: The shape or form of the UFO observed.
    
    duration (seconds): The duration of the sighting in seconds.
    
    duration (hours/min): The duration of the sighting in hours and minutes.
    
    comments: Additional comments or descriptions provided by the witness.
    
    day_posted: The day the sighting was reported or posted.
    
    date posted: The date the sighting was reported or posted.
    
    latitude: The latitude coordinate of the sighting location.
    
    longitude: The longitude coordinate of the sighting location.
    
    days_count: The number of days between the sighting and when it was posted.
    

    Analysis Process

    Data Cleaning and Preparation (Excel):
    
      Removed duplicate entries and handled missing values.
    
      Standardized formats for dates, times, and categorical variables (e.g., shapes, countries).
    
      Calculated additional metrics such as days_count (time between sighting and posting).
    
    Exploratory Data Analysis (SQL):
    
      Aggregated data to analyze trends, such as the number of sightings per country, state, or city.
    
      Calculated average durations of sightings by UFO shape.
    
      Identified the most common UFO shapes and their distribution across countries.
    
      Analyzed temporal trends, such as sightings per day or over time.
    
    Visualization (Tableau):
    
      Created interactive dashboards to visualize key insights.
    
      Developed charts such as:
    
        Average Duration of Sightings by Shape: Highlighting which UFO shapes were observed for the longest durations.
    
        UFO Shapes by Country: Showing the distribution of UFO shapes across different countries.
    
        UFO Shapes Total: A global overview of the most commonly reported UFO shapes.
    
        UFO Sightings in All Countries: A map or bar chart showing the number of sightings per country.
    
        UFO Sightings per Day: A time series analysis of sightings over days.
    
        UFO Sightings in the USA: A focused analysis of sightings in the United States, broken down by state or city.
    

    Key Insights and Conclusions

    Most Common UFO Shapes:
    
      The most frequently reported UFO shapes include lights, circles, and triangles.
    
      These shapes are consistent across multiple countries, suggesting common patterns in UFO sightings.
    
    Geographical Distribution:
    
      The United States has the highest number of reported UFO sightings, followed by Canada and the United Kingdom.
    
      Within the U.S., states like California, Florida, and Texas report the most sightings.
    
    Temporal Trends:
    
      Sightings have increased significantly since the mid-20th century, with a peak in the 2000s.
    
      Certain days of the week (e.g., weekends) show higher reporting rates, possibly due to increased outdoor activity.
    
    Duration of Sightings:
    
      The average duration of sightings varies by shape. For example, cigar-shaped UFOs tend to be observed for longer periods compared to light or disk shapes.
    
      Most sightings last less than a minute, but some reports describe durations of several hours.
    
    Reporting Delays:
    
      The days_count column reveals that many sightings are reported weeks or even months after they occur, indicating potential delays in witness reporting or data collection.
    
    Global Patterns:
    
      While the U.S. dominates the dataset, other countries show unique patterns in terms of UFO shapes and sighting frequencies.
    
      For example, Australia and Germany report a higher proportion of triangular UFOs compared to other shapes.
    

    Recommendations for Further Analysis

    Geospatial Analysis: Use latitude and longitude data to create heatmaps of sightings and identify potential hotspots.
    
    Text Analysis: Analyze the comments column using natural language processing (NLP) to extract common themes or keywords.
    
    Correlation with External Data: Investigate whether UFO sightings correlate with astronomical events, military activity, or other phenomena.
    
    Machine Learning: Build predictive models to identify patterns or classify sightings based on shape, duration, or location.
    

    Conclusion

    The UFO sightings dataset provides a fascinating glimpse into global reports of unidentified flying objects. Through careful analysis, I identified key trends in UFO shapes, durations, and geographical distribution. The United States emerges as the epicenter of UFO sightings, with lights and ...

  9. US Recorded Music Revenue by Format

    • kaggle.com
    zip
    Updated Dec 19, 2023
    + more versions
    Cite
    The Devastator (2023). US Recorded Music Revenue by Format [Dataset]. https://www.kaggle.com/datasets/thedevastator/us-recorded-music-revenue-by-format
    Explore at:
    zip(21740 bytes)Available download formats
    Dataset updated
    Dec 19, 2023
    Authors
    The Devastator
    Area covered
    United States
    Description

    US Recorded Music Revenue by Format

    Recorded music revenue in the US by format, for week 10 of each year

    By Throwback Thursday [source]

    About this dataset

    This dataset offers a comprehensive analysis of recorded music revenue in the United States, focusing on the 10th week of each year. The data is categorized by format, shedding light on the diverse ways in which music is consumed and purchased. Key columns include Format, Year, Units, Revenue, and Revenue (Inflation Adjusted). Together they capture the format of the music sold, the year in which the data was recorded, the number of units sold within each format category, and the total revenue generated from sales along with its inflation-adjusted equivalent. Analyzing this data can reveal meaningful patterns and trends that help industry professionals make informed decisions about marketing strategies or investments.

    How to use the dataset

    Introduction:

    • Familiarize Yourself with Columns:

      • Format: This column categorizes how music is consumed or purchased.
      • Year: This column represents the year when each data point was recorded.
      • Units: The number of units of music sold within a particular format during a given week.
      • Revenue: The total revenue generated from sales of music within a specific format during a given week.
      • Revenue (Inflation Adjusted): The total revenue generated from sales of music adjusted for inflation within a specific format during a given week.
    • Understanding Categorical Formats: In this dataset, formats refer to different ways in which music is consumed or purchased. Examples include physical formats like CDs and vinyl records, as well as digital formats such as downloads and streaming services.

    • Analyzing Trends over Time: By exploring data across multiple years, you can identify trends and patterns related to how formats have evolved over time. Use statistical techniques or visualization tools like line graphs or bar charts to gain insights into any fluctuations or consistent growth.

    • Comparing Units Sold vs Revenue Generated: Analyze both units sold and revenue generated columns simultaneously to understand if there are any significant differences between different formats' popularity versus their financial performance.

    • Examining Adjusted Revenue for Inflation Effects: Comparison between Revenue and Revenue (Inflation Adjusted) can provide insights into whether changes in revenue are due solely to changes in purchasing power caused by inflation or influenced by other factors affecting format popularity.

    • Identifying Format Preferences: Explore how units and revenue differ across various formats to determine whether consumer preferences are shifting towards digital formats or experiencing a resurgence in physical formats like vinyl.

    • Comparing Revenue Performance Between Formats: Use statistical analysis or data visualization techniques to compare revenue performance between different formats. Identify which format generates the highest revenue and whether there have been any changes in dominance over time.

    • Supplementary Research Opportunities: Combine this dataset with external sources on music industry trends, technological advancements, or major events like album releases to gain a deeper understanding of the factors influencing recorded music sales
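The revenue-versus-inflation comparison sketched in the steps above could look like this in pandas; the column names come from the description, while the revenue figures are invented:

```python
import pandas as pd

# Invented figures; only the column names mirror the dataset description.
rev = pd.DataFrame({
    "Format": ["CD", "CD", "Streaming", "Streaming"],
    "Year": [2000, 2020, 2000, 2020],
    "Revenue": [13000, 500, 100, 10000],
    "Revenue (Inflation Adjusted)": [19500, 520, 150, 10200],
})

# Each format's share of that year's total nominal revenue
totals = rev.groupby("Year")["Revenue"].transform("sum")
rev["Revenue Share"] = rev["Revenue"] / totals

# Ratio of adjusted to nominal revenue isolates the purchasing-power effect
rev["Adjustment Ratio"] = rev["Revenue (Inflation Adjusted)"] / rev["Revenue"]
```

A rising Revenue Share with a flat Adjustment Ratio points to a genuine market shift rather than an inflation artifact.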

    Research Ideas

    • Trend analysis: This dataset can be used to analyze the trends in recorded music revenue by format over the years. By examining the revenue and units sold for each format, one can identify which formats are growing in popularity and which ones are declining.
    • Comparison of revenue vs inflation-adjusted revenue: The dataset includes both total revenue and inflation-adjusted revenue for each format. This allows for a comparison of the actual revenue generated with the potential impact of inflation on that revenue. It can provide insights into whether the increase or decrease in revenue is solely due to changes in market demand or if it is influenced by changes in purchasing power.
    • Format preference analysis: By analyzing the units sold for each format, one can identify which formats are preferred by consumers during a particular week. This information can be useful for music industry professionals and marketers to under...
  10. American Time Use Survey: Daily Activities

    • kaggle.com
    zip
    Updated Dec 12, 2023
    Cite
    The Devastator (2023). American Time Use Survey: Daily Activities [Dataset]. https://www.kaggle.com/datasets/thedevastator/american-time-use-survey-daily-activities
    Explore at:
    zip(17763 bytes)Available download formats
    Dataset updated
    Dec 12, 2023
    Authors
    The Devastator
    Description

    American Time Use Survey: Daily Activities

    Americans' Daily Activities: Education, Employment, Gender, and Leisure Time

    By Throwback Thursday [source]

    About this dataset

    The American Time Use Survey dataset provides comprehensive information on how individuals in America allocate their time throughout the day. It includes various aspects of daily activities such as education level, age, employment status, gender, number of children, weekly earnings and hours worked. The dataset also includes data on specific activities individuals engage in like sleeping, grooming, housework, food and drink preparation, caring for children, playing with children, job searching, shopping and eating and drinking. Additionally it captures time spent on leisure activities like socializing and relaxing as well as engaging in specific hobbies such as watching television or golfing. The dataset also records the amount of time spent volunteering or running for exercise purposes.

    Each entry is organized based on categorical variables such as education level (ranging from lower levels to higher degrees), age (capturing different age brackets), employment status (including employed full-time or part-time), gender (male or female) and the number of children an individual has. Furthermore it provides information regarding an individual's weekly earnings and hours worked.

    This extensive dataset aims to provide insights into how Americans prioritize their time across various aspects of their lives. Whether the focus is work-related tasks or recreational activities, it offers a comprehensive look at the allocation of time among different demographic groups within American society.

    This dataset can be used to understand trends in daily activity patterns across demographic groups over multiple years without directly referencing specific dates.

    How to use the dataset

    How to use this dataset: American Time Use Survey - Daily Activities

    Welcome to the American Time Use Survey dataset! This dataset provides valuable information on how Americans spend their time on a daily basis. Here's a guide on how to effectively utilize this dataset for your analysis:

    • Familiarize yourself with the columns:

      • Education Level: The level of education attained by the individual.
      • Age: The age of the individual.
      • Age Range: The age range the individual falls into.
      • Employment Status: The employment status of the individual.
      • Gender: The gender of the individual.
      • Children: The number of children that an individual has.
      • Weekly Earnings: The amount of money earned by an individual on a weekly basis.
      • Year: The year in which the data was collected.
      • Weekly Hours Worked: The number of hours worked by an individual on a weekly basis.
    • Identify variables related to daily activities: This dataset provides information about various daily activities undertaken by individuals. Some important variables related to daily activities include:

      • Sleeping
      • Grooming
      • Housework
      • Food & Drink Prep
      • Caring for Children
      • Playing with Children
      • Job Searching …and many more!
    • Analyze time spent on different activities: This dataset includes numerical values representing time spent in minutes for specific activities such as sleeping, grooming, housework, food and drink preparation, etc. You can use this data to analyze and compare how different groups of individuals allocate their time throughout the day.

    • Explore demographic factors: In addition to daily activities, this dataset also includes columns such as education level, age range, employment status, gender, and number of children. You can cross-reference these demographic factors with activity data to gain insights into how different population subgroups spend their time differently.

    • Identify trends and patterns: You can use this dataset to identify trends and patterns in how Americans allocate their time over the years. By analyzing data from different years, you may discover changes in certain activities and how they relate to demographic factors or societal shifts.

    • Visualize the data: Creating visualizations such as bar graphs, line plots, or pie charts can provide a clear representation of how time is allocated for different activities among various groups of individuals. Visualizations help in understanding the distribution of time spent on different activities and identifying any significant differences or similarities across demographics.

    Remember that each column represents a specific variable, whi...
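Steps 3 and 4 above amount to a groupby over the activity columns. A minimal sketch, where the column names follow the list above and the minute values are invented:

```python
import pandas as pd

# Invented sample rows; column names match the dataset's column list.
atus = pd.DataFrame({
    "Gender": ["Male", "Female", "Male", "Female"],
    "Sleeping": [480, 500, 470, 510],  # minutes per day
    "Housework": [30, 60, 25, 55],
})

# Average minutes per day spent on each activity, by gender
by_gender = atus.groupby("Gender")[["Sleeping", "Housework"]].mean()
print(by_gender.loc["Female", "Sleeping"])  # 505.0
```

Swapping "Gender" for "Education Level" or "Age Range" gives the other demographic cross-sections described above.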

  11. Hospital Excel Dataset

    • kaggle.com
    zip
    Updated Apr 17, 2025
    Cite
    Omolola Labiyi (2025). Hospital Excel Dataset [Dataset]. https://www.kaggle.com/datasets/t0ut0u/hospital-excel-dataset
    Explore at:
    zip(18209846 bytes)Available download formats
    Dataset updated
    Apr 17, 2025
    Authors
    Omolola Labiyi
    License

    https://cdla.io/sharing-1-0/

    Description

    Healthcare administrators constantly face difficult questions about costs, patient volume, and resource allocation. Understanding how long patients stay, when admissions spike, and how treatment costs fluctuate can help hospitals plan staffing, negotiate insurance contracts, and manage operational efficiency. In this project, I analyzed hospital admissions and treatment cost data to identify patterns in patient stays, insurance coverage, and seasonal trends. The goal was to explore how hospitals could use data to better understand operational demand and financial performance. Using Microsoft Excel, I conducted exploratory analysis on hospital admission records that included patient demographics, hospital locations, insurance providers, treatment costs, and length of stay. Through PivotTables, formulas, and visualizations, I transformed raw data into insights that reveal how patient volume, insurance distribution, and treatment costs vary across hospitals and over time.

    Dataset Overview

    The dataset includes information on:
    • Patient admissions across multiple hospitals
    • Insurance providers and coverage distribution
    • Hospital stay durations
    • Treatment cost per day
    • Monthly admission trends

    These variables allowed for analysis of both operational hospital metrics and financial performance indicators.

    Analysis Approach

    To explore the dataset, I used Excel tools to summarize large volumes of hospital data and identify patterns. Techniques used included:
    • PivotTables to aggregate hospital admissions and insurance provider distribution
    • Conditional formatting to highlight cost changes across time periods
    • Bar, pie, and line charts to visualize operational trends
    • Calculations to measure average length of stay and daily treatment costs

    These tools allowed me to quickly transform transactional hospital data into meaningful visual insights.

    Key Findings

    Cost per Day Trends

    The average daily treatment cost across hospitals was $3,386.40. Costs fluctuated throughout the year, with the highest treatment costs occurring in September. This spike may reflect seasonal demand for medical procedures, insurance billing cycles, or higher treatment complexity. Following this peak, treatment costs declined during October and November, suggesting a potential normalization in hospital utilization or procedure volume.

    Average Length of Stay

    Across all hospitals, the average patient stay was approximately 16 days. Monthly variations were relatively small but still informative:
    • April recorded the longest stays, averaging about 15.75 days.
    • September recorded the shortest stays, averaging about 15.22 days.

    This distribution suggests that patient stay durations remain relatively stable across the year, though certain months may involve more complex cases or slower discharge cycles.

    Insurance Provider Distribution

    Insurance coverage varied significantly across the patient population:
    • Cigna covered the largest share of patients, accounting for approximately 20.27% of hospital admissions.
    • Aetna had the lowest patient share, suggesting smaller network presence or limited coverage in the hospitals represented in the dataset.

    Additionally, patient length of stay varied slightly by insurance provider. Patients covered by Medicare stayed an average of 15.63 days, while Aetna patients averaged 15.45 days. These differences may reflect variations in patient demographics, treatment complexity, or insurance policy structures.

    Seasonal Admission Patterns

    Admissions also followed a seasonal pattern:
    • August recorded the highest number of hospital admissions, potentially reflecting increased elective procedures or seasonal health conditions.
    • February had the lowest admission levels, suggesting lower procedural demand or fewer emergency cases.

    Understanding these seasonal trends could help hospitals better plan staffing levels and manage resource allocation during high-demand periods.

    Business Insights

    The analysis highlights several opportunities for hospital administrators and healthcare planners:
    • Rising treatment costs may require closer monitoring of hospital billing practices and insurance reimbursement structures.
    • Seasonal admission trends can help hospitals anticipate demand and allocate staff more effectively.
    • Insurance provider distribution may influence strategic partnerships between hospitals and insurers.
    • Monitoring length-of-stay trends can help identify operational inefficiencies or opportunities to improve discharge planning.

    By leveraging simple analytical tools in Excel, hospital operations teams can uncover valuable insights that support more informed planning and decision-making.

    Skills Demonstrated

    • Data exploration and cleaning in Excel
    • PivotTable-based analysis
    • Trend analysis and statistical summaries
    • Data visualization using charts and dashboards
    • Healthcare operational data analysis
    • Translating raw data into actionable insights
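The Excel PivotTable analysis described above has a direct pandas equivalent. A sketch with hypothetical column names standing in for the dataset's fields and invented values:

```python
import pandas as pd

# Hypothetical column names and invented values for illustration only.
adm = pd.DataFrame({
    "Insurance Provider": ["Cigna", "Cigna", "Aetna", "Medicare"],
    "Length of Stay": [14, 18, 15, 16],  # days
    "Treatment Cost": [42000, 54000, 45000, 48000],
})

# Derive daily cost, then pivot: average stay and daily cost per insurer
adm["Cost per Day"] = adm["Treatment Cost"] / adm["Length of Stay"]
pivot = adm.pivot_table(
    index="Insurance Provider",
    values=["Length of Stay", "Cost per Day"],
    aggfunc="mean",
)
print(pivot.loc["Cigna", "Length of Stay"])  # 16.0
```

Adding a month column to the index would reproduce the seasonal breakdowns (September cost peak, August admission peak) reported above.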

  12. Women's Football (European Leagues)

    • kaggle.com
    zip
    Updated Dec 8, 2022
    Cite
    The Devastator (2022). Women's Football (European Leagues) [Dataset]. https://www.kaggle.com/datasets/thedevastator/uncovering-female-football-success-in-top-europe
    Explore at:
    zip(379479 bytes)Available download formats
    Dataset updated
    Dec 8, 2022
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    Europe
    Description

    Women's Football (European Leagues)

    Team and Player Performance Statistics

    By [source]

    About this dataset

    This dataset includes comprehensive performance data and player statistics for women's football from the top 5 European leagues: Serie A in Italy, Liga Femenina in Spain, the Women's Super League in England, Bundesliga Frauen in Germany, and Division 1 Feminin in France. Gathered throughout each season of the respective leagues, the dataset tracks teams, players, matches and a range of important performance metrics. It provides intriguing insight into team success and player form, covering parameters such as goals scored per game (xGHome), clean sheets (CS), and the number of opponents' passes allowed (Sweeper_#OPA), as well as individual statistics such as crosses stopped (Crosses_Stp). Analyze this data to gain further insight into how female football is developing across Europe's major leagues!


    How to use the dataset

    This dataset can be used to analyze and compare the performance of teams and players across the top five European leagues: Serie A in Italy, Liga Femenina in Spain, Women's Super League in England, Bundesliga Frauen in Germany, and Division 1 Feminin in France. The dataset provides records of each individual match that occurred within these leagues during the tracked season(s), as well as a range of performance metrics for both teams and players.

    To use this dataset effectively it is important to understand which columns are available, as described above. By exploring different combinations of team-level versus player-level data you will be able to identify correlations between certain performance metrics for teams or players that provide insights about female football success across Europe.

    Once you’re ready to start exploring the data there are several approaches you may take from visualizing your data via bar or line graphs with Python Matplotlib or Seaborn packages; correlating team-level versus player-level statistics such as number of wins (W) compared against goalkeeper saves (Saves); or performing more complicated regression analyses on your data that explore how different features like time played (Min) can predict goals scored (Goals_FK). Each approach provides unique insights into trends within female football success.

    No matter how you choose to analyze this dataset, note that trendlines may shift from year to year, so use consistent periods when comparing changes between seasons. It is also helpful to break down aggregate results by country when analyzing trends across Europe, so consider running separate analyses for each country instead of aggregating them all together. With this stepwise approach, careful exploration of the data can begin 'uncovering' female football success!
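The wins-versus-saves correlation suggested above can be sketched in a few lines of pandas; the team names and statistics here are invented, and only the W and Saves column names are taken from the text:

```python
import pandas as pd

# Invented team statistics; only the W and Saves column names come from the text.
teams = pd.DataFrame({
    "Team": ["A", "B", "C", "D"],
    "W": [20, 15, 10, 5],        # wins
    "Saves": [60, 75, 90, 110],  # goalkeeper saves
})

# Pearson correlation between wins and goalkeeper saves: a strongly
# negative value would suggest weaker teams force their keepers to make
# more saves.
corr = teams["W"].corr(teams["Saves"])
print(corr < 0)  # True for this sample
```

Running the same computation per country, as recommended above, guards against pooling leagues with different playing styles.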

    Research Ideas

    • Analyzing the effect of player performance metrics on team success and vice versa: Using this dataset, it is possible to analyze how changes in different player performance metrics might affect overall team performance (e.g. goals scored or allowed, clean sheets). With further analysis, correlations can be drawn between teams’ and players’ performances under different match-day conditions such as travel distance or surface type.

    • Examining trends in the development of female football: This data set spans multiple seasons, making it possible to evaluate general trends, such as how the average age of players varies across countries and affects their performance, or to identify underused opportunities for young talented footballers in specific countries that their governing bodies could act on;

    • Benchmarking positions used among teams versus outside experts' opinions: This dataset can be used to compare positional performance as judged by scouts with actual field results from teams in each country's top league, highlighting where consensus is reached and where discrepancies appear. For example, one may cross-examine national team call-up rosters with squad selections for clubs' top female divisions, finding anomalies not spotted by those making roster decisions, and thereby deriving more informed, fact-based choices about which player should officially take ...

