Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: To break with the traditional model of Basic Statistics classes in Higher Education, we drew on Statistical Literacy and Critical Education to develop an activity on graphic interpretation, carried out in a Virtual Learning Environment (VLE) as a complement to classroom meetings. Twenty-three engineering students from a public higher education institution in Rio de Janeiro took part in the research. Our objective was to analyze elements of graphic comprehension in an activity that consisted of identifying incorrect statistical graphs conveyed by the media, followed by argumentation and interaction among students about these errors. The main results showed that elements of Graphic Sense were present in the discussions and were the focus of the students' critical analysis. The VLE facilitated communication, fostered student participation, and supported written expression; the use of digital technologies, together with activities based on collaboration and interaction, is therefore important for statistical development, although such construction is a gradual process.
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
Data visualization is the graphical representation of information and data. By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data.
In the world of Big Data, data visualization tools and technologies are essential for analyzing massive amounts of information and making data-driven decisions.
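As a tiny illustration of the kind of content collected in this corpus, here is a minimal Matplotlib sketch (the data values are invented purely for the example) that turns a handful of numbers into a trend chart:

```python
import matplotlib.pyplot as plt

# Hypothetical monthly values, purely illustrative
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [120, 135, 128, 160, 172, 190]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(months, sales, marker="o")  # a line chart highlights the trend
ax.set_title("Monthly sales (illustrative data)")
ax.set_xlabel("Month")
ax.set_ylabel("Units sold")
plt.tight_layout()
plt.show()
```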
32 cheat sheets: an A-Z of visualization techniques and tricks, Python and R visualization cheat sheets, types of charts and their significance, storytelling with data, and more.
32 charts: the corpus also includes detailed information on a wide range of data visualization charts, along with their Python code, d3.js code, and presentations explaining each chart clearly!
Some recommended data visualization books every data scientist should read:
If you find any books, cheat sheets, or charts missing, or would like to suggest new documents, please let me know in the discussion section!
A kind request to Kaggle users: create notebooks on different visualization charts, each with a dataset of your own choosing, as many beginners and experts alike could find them useful!
Try creating interactive EDA that combines animation with data visualization charts, to show how to tackle a dataset and extract insights from it.
Feel free to use the discussion platform of this dataset to ask questions about the data visualization corpus or data visualization techniques.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This file is the data set from the famous publication Francis J. Anscombe, "*Graphs in Statistical Analysis*", The American Statistician 27, pp. 17-21 (1973) (doi: 10.1080/00031305.1973.10478966). It consists of four data sets of 11 points each. Note the peculiarity that the same 'x' values are used for the first three data sets; I have followed this exactly as in the original publication (where it was done to save space), i.e., the first column (x123) serves as the 'x' for the next three 'y' columns: y1, y2, and y3.
In the dataset Anscombe_quintet_data.csv there is a new column (y5) as an example of Simpson's paradox (C. McBride Ellis, "*Anscombe dataset No. 5: Simpson's paradox*", Zenodo, doi: 10.5281/zenodo.15209087 (2025)).
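A minimal pandas/matplotlib sketch of the classic lesson here (column names x123 and y1-y3 as described above; the name of the fourth set's own x-column, x4, is an assumption about the file): the four sets share nearly identical summary statistics yet look completely different when plotted.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Column layout per the description: x123 is shared by y1, y2, and y3.
# The fourth set's own x-column (called x4 here) is an assumption.
df = pd.read_csv("Anscombe_quintet_data.csv")

pairs = [("x123", "y1"), ("x123", "y2"), ("x123", "y3"), ("x4", "y4")]
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, (xcol, ycol) in zip(axes.flat, pairs):
    x, y = df[xcol], df[ycol]
    # The point of the quartet: near-identical summary statistics...
    print(f"{ycol}: mean={y.mean():.2f}, var={y.var():.2f}, corr={x.corr(y):.3f}")
    # ...yet visibly different structure once plotted
    ax.scatter(x, y)
    ax.set_title(f"{xcol} vs {ycol}")
plt.tight_layout()
plt.show()
```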
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Statistics about the DBpedia SPARQL logs used.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Use of vectors in financial graphs: applying mathematical vector calculations to financial modeling, and developing them further into a new form of quantitative analysis instrument for linear financial computation graphs. A new tool in financial data analysis, usable as an indicator.
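The description is terse, so the following is only a guess at one way such a vector indicator might be built (the function name and the construction itself are assumptions, not the author's published method): treat each step between consecutive closing prices as a 2-D vector, and read trend direction from its angle and movement size from its magnitude.

```python
import numpy as np

def vector_indicator(prices, dt=1.0):
    """Illustrative only: angles and magnitudes of price-movement vectors.

    Each step from (t, p_t) to (t+dt, p_{t+1}) is treated as a 2-D vector;
    the angle measures trend direction, the norm measures movement size.
    This is a hypothetical construction, not a published indicator.
    """
    prices = np.asarray(prices, dtype=float)
    dp = np.diff(prices)                      # price change per step
    vectors = np.column_stack([np.full_like(dp, dt), dp])
    angles = np.degrees(np.arctan2(dp, dt))   # > 0 uptrend, < 0 downtrend
    magnitudes = np.linalg.norm(vectors, axis=1)
    return angles, magnitudes

angles, mags = vector_indicator([100.0, 101.5, 101.2, 103.0, 102.4])
print(angles.round(1), mags.round(2))
```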
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
This data set is perfect for practicing your analytical skills in Power BI, Tableau, or Excel, or you can transform it into CSV files to practice SQL.
This use case mimics transactions for a fictional eCommerce website named EverMart Online. The 3 tables in this data set are all logically connected together with IDs.
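The exact table and column names are not listed above, so this pandas sketch uses hypothetical names (customers, products, transactions, joined on customer_id and product_id) purely to illustrate combining the three ID-linked tables:

```python
import pandas as pd

# Hypothetical file and column names; adjust to the actual EverMart Online tables.
customers = pd.read_csv("customers.csv")        # customer_id, name, continent, ...
products = pd.read_csv("products.csv")          # product_id, category, price, ...
transactions = pd.read_csv("transactions.csv")  # transaction_id, customer_id, product_id, ...

# Join the three tables on their shared IDs into one analysis-ready frame
sales = (
    transactions
    .merge(customers, on="customer_id", how="left")
    .merge(products, on="product_id", how="left")
)

# Example aggregate: total revenue by product category (assumes a price column)
print(sales.groupby("category")["price"].sum().sort_values(ascending=False))
```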
My Power BI Use Case Explanation: Using Microsoft Power BI, I made dynamic data visualizations for revenue reporting and customer behavior reporting.
Revenue Reporting Visuals:
- Data Card Visual that dynamically shows Total Products Listed, Total Unique Customers, Total Transactions, and Total Revenue by Total Sales, Product Sales, or Categorical Sales.
- Line Graph Visual that shows Total Revenue by Month for the entire year. This graph also recalculates Total Revenue by Month for Total Sales by Product and Total Sales by Category if selected.
- Bar Graph Visual showcasing Total Sales by Product.
- Donut Chart Visual showcasing Total Sales by Category of Product.
Customer Behavior Reporting Visuals:
- Data Card Visual that dynamically shows Total Products Listed, Total Unique Customers, Total Transactions, and Total Revenue by Total or by the continent selected on the map.
- Interactive Map Visual showing key statistics for the selected continent. The key statistics are presented in the tooltip when you select a continent, and the following statistics show for that continent:
  - Continent Name
  - Customer Total
  - Percentage of Products Sold
  - Percentage of Total Customers
  - Percentage of Total Transactions
  - Percentage of Total Revenue
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset has been created in the framework of the Plastic Twist project (Ptwist), and more specifically using the Ptwist crowdsourcing application (crowdsourcing.plastictwist.com/). We are sharing the edge list and specific node attributes (hashtags) of Twitter users posting about plastic pollution. The dataset can be used for community detection, clustering, node importance, influence maximization tasks, etc. Each user is represented by a unique integer which has nothing to do with the official Twitter user ID. The dataset contains three (3) files:
- ptwist.edgelist: a list containing all the 1,362,863 edges between the users. When loaded, they create an undirected graph of 800K+ users.
- node_attributes.txt: information about the hashtags used by each user (e.g. "652003": ["SingleUsePlastic"] means that user 652003 has used the hashtag SingleUsePlastic).
- annotated_graph: a pickle file which, when loaded, returns a NetworkX node-attributed undirected graph.
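A minimal sketch of loading these files with NetworkX (file names as listed above; the attribute file is assumed to be JSON-parseable, based on the "652003": ["SingleUsePlastic"] example, and the pickle loadable with Python's pickle module):

```python
import json
import pickle
import networkx as nx

# Load the edge list as an undirected graph
G = nx.read_edgelist("ptwist.edgelist")
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")

# Attach hashtag attributes; a JSON-like layout is assumed from the example above
with open("node_attributes.txt") as f:
    hashtags = json.load(f)
nx.set_node_attributes(G, hashtags, name="hashtags")

# Alternatively, load the ready-made annotated graph directly
with open("annotated_graph", "rb") as f:
    G_annotated = pickle.load(f)
```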
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A graph-structured knowledge base known as a Knowledge Graph (KG) consists of a terminology (vocabulary or ontology) along with interconnected data entities, all based on semantic web technologies like RDF and SPARQL. Knowledge Graphs represent an important and powerful tool for achieving interoperability in and across research domains and fulfilling the mission of the NFDI (National Research Data Infrastructure). Several consortia are building their own KG solutions, embedded in their overall data management strategy. The Working Group "Knowledge Graphs" (WG KGs) was established in the NFDI Section "(Meta)data, Terminologies, Provenance" to coordinate the development and use of KGs in all NFDI consortia. It has carried out an evaluation of the state of the art of KG adoption as well as the need for additional support in the NFDI. This led to the development of the KGI4NFDI (Knowledge Graph Infrastructure for the German National Research Data Infrastructure) service proposal which will support NFDI consortia by providing guidance and documentation around development practices as well as software dedicated to the creation and (re)use of KGs, including tools for data import, validation and export, collaborative frontends, search APIs, SPARQL endpoints, and tools for visualizing query results. Besides decentralised tooling, the service will establish a registry of KGs utilized by NFDI consortia, which will be presented in the form of its own KG. Alongside this, the service will devise an interoperability strategy, conduct surveys, and demonstrate the application of KGs across various research fields and scenarios using diverse methods. The implementation will leverage a widely-used FLOSS technology stack to ensure maximum reusability and sustainability, with any generated solutions being made available under open source and content licenses for others to benefit from. This presentation will outline the mission and core objective of KGI4NFDI that will be developed starting in the summer of 2024. We would like to use this opportunity to gather feedback as well as engage with the NFDI and research community.
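As a concrete illustration of the SPARQL endpoints such a service would expose, here is a minimal query against the public Wikidata endpoint using the SPARQLWrapper library (the endpoint and query are generic examples, not part of KGI4NFDI):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Generic example: any SPARQL endpoint is queried the same way
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .   # instances of 'house cat'
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```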
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Statistics about the Wikidata SPARQL logs used.
The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly. While it was estimated at ***** zettabytes in 2025, the forecast for 2029 stands at ***** zettabytes. Thus, global data generation will triple between 2025 and 2029. Data creation has been expanding continuously over the past decade. In 2020, the growth was higher than previously expected, caused by the increased demand due to the coronavirus (COVID-19) pandemic, as more people worked and learned from home and used home entertainment options more often.
Privacy policy: https://www.cognitivemarketresearch.com/privacy-policy
According to Cognitive Market Research, the global semantic knowledge graphing market size is USD 1512.2 million in 2024 and will expand at a compound annual growth rate (CAGR) of 14.80% from 2024 to 2031.
North America held the largest share, around 40% of the global revenue, with a market size of USD 604.88 million in 2024, and will grow at a compound annual growth rate (CAGR) of 13.0% from 2024 to 2031.
Europe accounted for over 30% of the global revenue, with a market size of USD 453.66 million in 2024.
Asia Pacific held around 23% of the global revenue, with a market size of USD 347.81 million in 2024, and will grow at a CAGR of 16.8% from 2024 to 2031.
Latin America held around 5% of the global revenue, with a market size of USD 75.61 million in 2024, and will grow at a CAGR of 14.2% from 2024 to 2031.
The Middle East and Africa held around 2% of the global revenue, with a market size of USD 30.24 million in 2024, and will grow at a CAGR of 14.5% from 2024 to 2031.
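These projections all follow the standard compound-growth identity, final = base × (1 + CAGR)^years. A quick Python check using the global figures above:

```python
# Project the 2031 global market size from the 2024 base and the stated CAGR
base_2024 = 1512.2   # USD million (stated above)
cagr = 0.148         # 14.80% per year (stated above)
years = 2031 - 2024

size_2031 = base_2024 * (1 + cagr) ** years
print(f"Implied 2031 global market size: USD {size_2031:,.1f} million")
# ≈ USD 3,974 million implied by the stated growth rate
```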
The natural language processing segment had the highest growth rate in the semantic knowledge graphing market in 2024.
Market Dynamics of Semantic Knowledge Graphing Market
Key Drivers of Semantic Knowledge Graphing Market
Growing Volumes of Structured, Semi-structured, and Unstructured Data to Increase the Global Demand
The global demand for semantic knowledge graphing is escalating in response to the exponential growth of structured, semi-structured, and unstructured data. Enterprises are inundated with vast amounts of data from diverse sources such as social media, IoT devices, and enterprise applications. Structured data from databases, semi-structured data like XML and JSON, and unstructured data from documents, emails, and multimedia files present significant challenges in terms of organization, analysis, and deriving actionable insights. Semantic knowledge graphing addresses these challenges by providing a unified framework for representing, integrating, and analyzing disparate data types. By leveraging semantic technologies, businesses can unlock the value hidden within their data, enabling advanced analytics, natural language processing, and knowledge discovery. As organizations increasingly recognize the importance of harnessing data for strategic decision-making, the demand for semantic knowledge graphing solutions continues to surge globally.
Demand for Contextual Insights to Propel the Growth
The burgeoning demand for contextual insights is propelling the growth of semantic knowledge graphing solutions. In today's data-driven landscape, businesses are striving to extract deeper contextual meaning from their vast datasets to gain a competitive edge. Semantic knowledge graphing enables organizations to connect disparate data points, understand relationships, and derive valuable insights within the appropriate context. This contextual understanding is crucial for various applications such as personalized recommendations, predictive analytics, and targeted marketing campaigns. By leveraging semantic technologies, companies can not only enhance decision-making processes but also improve customer experiences and operational efficiency. As industries across sectors increasingly recognize the importance of contextual insights in driving innovation and business success, the adoption of semantic knowledge graphing solutions is poised to witness significant growth. This trend underscores the pivotal role of semantic technologies in unlocking the true potential of data for strategic advantage in today's dynamic marketplace.
Restraint Factors Of Semantic Knowledge Graphing Market
Stringent Data Privacy Regulations to Hinder the Market Growth
Stringent data privacy regulations present a significant hurdle to the growth of the Semantic Knowledge Graphing market. Regulations such as GDPR (General Data Protection Regulation) in Europe and CCPA (California Consumer Privacy Act) in the United States impose strict requirements on how organizations collect, store, process, and share personal data. Compliance with these regulations necessitates robust data protection measures, including anonymization, encryption, and access controls, which can complicate the implementation of semantic knowledge graphing systems. Moreover, concerns about data breach...
By Throwback Thursday [source]
The dataset contains multiple columns that provide specific information for each year recorded. The column labeled Year indicates the specific year in which the data was recorded. The Pieces of Mail Handled column shows the total number of mail items that were processed or handled in a given year.
Another important metric is represented in the Number of Post Offices column, revealing the total count of post offices that were operational during a specific year. This information helps understand how postal services and infrastructure have evolved over time.
Examining financial aspects, there are two columns: Income and Expenses. The former represents the total revenue generated by the US Mail service in a particular year, while the latter showcases the expenses incurred by this service during that same period.
The dataset titled Week 22 - US Mail - 1790 to 2017.csv serves as an invaluable resource for researchers, historians, and analysts interested in studying trends and patterns within the US Mail system throughout its extensive history. By utilizing this dataset's wide range of valuable metrics, users can gain insights into how mail volume has changed over time alongside fluctuations in post office numbers and financial performance.
Familiarize yourself with the columns:
- Year: This column represents the specific year in which data was recorded. It is represented by numeric values.
- Pieces of Mail Handled: This column indicates the number of mail items processed or handled in a given year. It is also represented by numeric values.
- Number of Post Offices: Here, you will find information on the total count of post offices in operation during a specific year. Like other columns, it consists of numeric values.
- Income: The Income column displays the total revenue generated by the US Mail service in a particular year. Numeric values are used to represent this data.
- Expenses: This column shows the total expenses incurred by the US Mail service for a particular year. Similar to other columns, it uses numeric values.
Understand data relationships: By exploring and analyzing different combinations of columns, you can uncover interesting patterns and relationships within mail statistics over time. For example:
Relationship between Year and Pieces of Mail Handled/Number of Post Offices/Income/Expenses: analyzing these variables over the years will allow you to observe trends such as increasing mail volume alongside changes in post office numbers or income and expense patterns.
Relationship between Pieces of Mail Handled and Number of Post Offices: by comparing these two variables across different years, you can assess whether there is any correlation between mail volume growth and changes in post office counts.
Visualization:
To gain better insights into this vast amount of data visually, consider making use of graphs or plots beyond just numerical analysis. You can use tools like Matplotlib, Seaborn, or Plotly to create various types of visualizations (a minimal sketch follows the list below):
- Time-series line plots: Visualize the change in Pieces of Mail Handled, Number of Post Offices, Income, and Expenses over time.
- Scatter plots: Identify potential correlations between different variables such as Year and Pieces of Mail Handled/Number of Post Offices/Income/Expenses.
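A minimal Matplotlib sketch along these lines, using the file name and column names given above (pandas is assumed for loading the CSV):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("Week 22 - US Mail - 1790 to 2017.csv")

# Time-series line plots of the four main metrics
columns = ["Pieces of Mail Handled", "Number of Post Offices", "Income", "Expenses"]
fig, axes = plt.subplots(2, 2, figsize=(10, 6), sharex=True)
for ax, col in zip(axes.flat, columns):
    ax.plot(df["Year"], df[col])
    ax.set_title(col)
    ax.set_xlabel("Year")
plt.tight_layout()
plt.show()

# Scatter plot: does mail volume track the number of post offices?
plt.scatter(df["Number of Post Offices"], df["Pieces of Mail Handled"], s=10)
plt.xlabel("Number of Post Offices")
plt.ylabel("Pieces of Mail Handled")
plt.show()
```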
Drawing conclusions:
This dataset presents an extraordinary opportunity to learn about the history and evolution of the US Mail service. By examining various factors together or individually throughout time, you can draw conclusions about:
- Trend Analysis: The dataset can be used to analyze the trends and patterns in mail volume, post office numbers, income, and expenses over time. This can help identify any significant changes or fluctuations in these variables and understand the factors that may have influenced them.
- Benchmarking: By comparing the performance of different years or periods, this dataset can be used for benchmarking purposes. For example, it can help assess how efficiently post offices have been handling mail items by comparing the number of pieces of mail handled with the corresponding expenses incurred.
- Forecasting: Based on historical data on mail volume and revenue generation, this dataset can be used for forecasting future trends. This could be valuable for planning purposes, such as determining resource allocation or projecting financial o...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
India: Fertilizer use, kg per hectare of arable land: The latest value from 2023 is 199.1 kg per hectare of arable land, an increase from 193.8 kg per hectare of arable land in 2022. In comparison, the world average is 153.7 kg per hectare of arable land, based on data from 187 countries. Historically, the average for India from 1961 to 2023 is 86 kg per hectare of arable land. The minimum value, 2.2 kg per hectare of arable land, was reached in 1961 while the maximum of 210.7 kg per hectare of arable land was recorded in 2020.
Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
License information was derived automatically
As important carriers of innovation activities, patents, sci-tech achievements, and papers play an increasingly prominent role in national political and economic development against the background of a new round of technological revolution and industrial transformation. However, in a distributed and heterogeneous environment, the integration and systematic description of patent, sci-tech achievement, and paper data are still insufficient, which limits the in-depth analysis and utilization of the related data resources. A knowledge graph constructed from patents, sci-tech achievements, and papers is an important means of promoting innovation network research, and is of great significance for strengthening the development, utilization, and knowledge mining of innovation data. This work collected data on patents, sci-tech achievements, and papers from China's authoritative websites, spanning the three major industries (agriculture, industry, and services) during the period 2022-2025. After cleaning, organizing, and normalization, a patents-sci-tech-achievements-papers knowledge graph dataset was formed, containing 10 entity types and 8 types of entity relationships. To ensure the quality and accuracy of the data, the entire process involved strict preprocessing, semantic extraction, and verification, with an ontology model introduced as the schema layer of the knowledge graph. The dataset establishes direct correlations among patents, sci-tech achievements, and papers through inventors/contributors/authors, and uses the Neo4j graph database for storage and visualization. The open dataset constructed in this study can serve as important foundational data for building knowledge graphs in the field of innovation, providing structured data support for innovation activity analysis, scientific research collaboration network analysis, and knowledge discovery.

The dataset consists of two parts. The first part includes three Excel tables: 1,794 patent records with 10 fields, 181 paper records with 7 fields, and 1,156 scientific and technological achievement records with 11 fields. The second part is a knowledge graph dataset in CSV format that can be imported into Neo4j, comprising 10 entity files and 8 relationship files.
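A minimal sketch of importing such CSV entity and relationship files into Neo4j from Python (the driver usage is standard; the file names, labels, and column names here are hypothetical, since the actual schema is not listed above):

```python
from neo4j import GraphDatabase

# Connection details are placeholders; adjust to your Neo4j instance
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_entities(tx):
    # Hypothetical entity file with id/title columns, placed in Neo4j's import dir
    tx.run("""
        LOAD CSV WITH HEADERS FROM 'file:///patents.csv' AS row
        MERGE (p:Patent {id: row.id})
        SET p.title = row.title
    """)

def load_relationships(tx):
    # Hypothetical relationship file linking inventors to patents
    tx.run("""
        LOAD CSV WITH HEADERS FROM 'file:///invented_by.csv' AS row
        MATCH (p:Patent {id: row.patent_id})
        MERGE (i:Inventor {name: row.inventor})
        MERGE (i)-[:INVENTED]->(p)
    """)

with driver.session() as session:
    session.execute_write(load_entities)
    session.execute_write(load_relationships)
driver.close()
```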
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Romania - Last internet use: in last 3 months was 91.29% in December of 2024, according to the EUROSTAT. Trading Economics provides the current actual value, an historical data chart and related indicators for Romania - Last internet use: in last 3 months - last updated from the EUROSTAT on December of 2025. Historically, Romania - Last internet use: in last 3 months reached a record high of 91.29% in December of 2024 and a record low of 36.00% in December of 2010.
This graph displays how important social identity is to adults in Great Britain in 2017 by age group. The survey showed that ** percent of ***** year olds believe social identity is important, which is * percentage points higher than those aged 50 years and older. The majority of those aged 25 to 64 believe social identity is not important.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic "Pillar 1 - Data Use - Score" and the country Oman. Indicator definition: indicators that capture the demand side of a statistical system.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Kenya: Fertilizer use, kg per hectare of arable land: The latest value from 2023 is 50.5 kg per hectare of arable land, an increase from 33.2 kg per hectare of arable land in 2022. In comparison, the world average is 153.7 kg per hectare of arable land, based on data from 187 countries. Historically, the average for Kenya from 1961 to 2023 is 25.4 kg per hectare of arable land. The minimum value, 3.2 kg per hectare of arable land, was reached in 1961 while the maximum of 59.5 kg per hectare of arable land was recorded in 2017.
Privacy policy: https://dataintelo.com/privacy-and-policy
As per our latest research, the global Cross‑Domain Lineage Graphs market size stood at USD 1.13 billion in 2024, reflecting the growing significance of data governance and enterprise data transparency across industries. The market is projected to expand at a CAGR of 21.7% from 2025 to 2033, reaching an estimated USD 8.54 billion by 2033. This robust growth is primarily fueled by the rising demand for comprehensive data lineage solutions that can seamlessly operate across multiple domains, ensuring regulatory compliance, data security, and operational efficiency in increasingly complex IT environments.
The primary growth driver for the Cross‑Domain Lineage Graphs market is the escalating complexity of enterprise data ecosystems. Organizations are generating and consuming data across a multitude of platforms, applications, and domains, necessitating advanced lineage solutions that provide end-to-end visibility. As data flows become more convoluted due to cloud migrations, hybrid infrastructures, and the proliferation of SaaS applications, businesses are compelled to adopt cross-domain lineage graphs to track data movement, transformations, and dependencies. This capability not only enhances operational transparency but also enables organizations to swiftly identify and remediate data quality issues, thereby supporting more informed decision-making and fostering trust in enterprise data assets.
Another critical factor propelling market growth is the intensifying regulatory landscape across industries such as BFSI, healthcare, and government. Stringent data privacy and security regulations, including GDPR, HIPAA, and CCPA, require organizations to demonstrate comprehensive data lineage and audit trails. Cross‑Domain Lineage Graphs facilitate this by providing a unified view of data flow and transformations, regardless of where data resides or how it is processed. This is particularly vital for organizations operating in highly regulated sectors, as it helps mitigate compliance risks, avoid costly penalties, and maintain customer trust. The growing emphasis on data governance and risk management is expected to sustain high demand for these solutions over the forecast period.
Technological advancements are also playing a pivotal role in shaping the Cross‑Domain Lineage Graphs market. The integration of artificial intelligence, machine learning, and automation into lineage graph solutions is enabling more accurate, scalable, and real-time tracking of data across heterogeneous environments. These innovations are reducing the complexity and cost of deploying lineage solutions, making them accessible to a broader range of organizations, including small and medium enterprises. Furthermore, the adoption of cloud-based deployment models is accelerating, as they offer scalability, flexibility, and lower total cost of ownership. These technological trends are expected to further catalyze market expansion, opening up new avenues for vendors and end-users alike.
From a regional perspective, North America continues to dominate the Cross‑Domain Lineage Graphs market, driven by the presence of major technology vendors, early adoption of advanced data management practices, and stringent regulatory requirements. However, Asia Pacific is emerging as the fastest-growing region, fueled by rapid digital transformation, increasing investments in IT infrastructure, and rising awareness of data governance. Europe also holds a significant share, supported by robust data protection laws and a mature enterprise landscape. Meanwhile, Latin America and the Middle East & Africa are witnessing steady growth as organizations in these regions recognize the strategic importance of data lineage in supporting business intelligence and compliance initiatives. This global momentum underscores the critical role that cross-domain lineage solutions play in the modern data-driven enterprise.
The Cross‑Domain Lineage Graphs market is segmented by component into software and services, each playing a distinct yet complementary role in delivering comprehensive lineage solutions. The software segment, which comprises standalone lineage graph platforms and integrated modules within broader data management suites, accounts for the largest market share. This dominance is attributed to the increasing need for automated, real-time data tracking and visualization tools that can scale with enterprise da
By Gabe Salzer [source]
This dataset contains essential performance statistics for NBA rookies from 1980-2016. Here you can find minutes-per-game stats, points scored, field goals made and attempted, three-pointers made and attempted, free throws made and attempted (with the respective percentages for each), offensive rebounds, defensive rebounds, assists, steals, blocks, turnovers, efficiency rating, and Hall of Fame induction year. It is organized in descending order by minutes played per game as well as by draft year. This Kaggle dataset is an excellent resource for basketball analysts seeking a better understanding of how rookies have evolved over the years, from their stats to their induction into the Hall of Fame. With its great detail on individual players' performance data, this dataset allows you to compare performances across different eras in NBA history, along with overall trends in rookie statistics. Compare rookies drafted far apart or those that played together, whatever your goal may be!
This dataset is perfect for providing insight into the performance of NBA rookies over an extended period of time. The data covers rookie stats from 1980 to 2016 and includes statistics such as points scored, field goals made, free throw percentage, offensive rebounds, defensive rebounds and assists. It also provides the name of each rookie along with the year they were drafted and their Hall of Fame class.
This data set is useful for researching how rookies’ stats have changed over time in order to compare different eras or identify trends in player performance. It can also be used to evaluate players by comparing their stats against those of other players or previous years’ stats.
In order to use this dataset effectively, a few tips are helpful:
- Consider using Field Goal Percentage (FG%), Three Point Percentage (3P%), and Free Throw Percentage (FT%) to measure a player's efficiency beyond just points scored or field goals made/attempted (FGM/FGA); see the sketch after this list.
- Look out for anomalies such as low efficiency ratings despite high minutes played: this could indicate either that a player has not had enough playing time for their statistics to reach what would be a per-game average at higher minutes, or that they simply did not play well over that short period with limited opportunities.
- Try different visualizations of the data, such as histograms, line graphs, and scatter plots, because each may offer different insights into varied aspects of the data set, like comparisons between individual years versus aggregate trends over multiple years.
- Lastly, it is important to keep in mind whether you are dealing with cumulative totals over multiple seasons versus individual season averages or per-game numbers when attempting analysis on these sets!
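A small pandas sketch of the first tip, computing shooting percentages from makes and attempts (the column names such as FGM, FGA, and MIN are assumptions based on the abbreviations above; adjust to the actual headers in the file):

```python
import pandas as pd

df = pd.read_csv("NBA Rookies by Year_Hall of Fame Class.csv")

# Column names assumed from the FGM/FGA-style abbreviations above
df["FG%"] = 100 * df["FGM"] / df["FGA"]
df["3P%"] = 100 * df["3PM"] / df["3PA"]
df["FT%"] = 100 * df["FTM"] / df["FTA"]

# Efficiency beyond raw scoring: top rookies by field goal percentage,
# restricted to players with meaningful minutes per game
qualified = df[df["MIN"] >= 15]
print(qualified.nlargest(10, "FG%")[["Name", "FG%", "MIN"]])
```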
- Evaluating the performance of historical NBA rookies over time and how this can help inform future draft picks in the NBA.
- Analysing the relative importance of certain performance stats, such as three-point percentage, to overall success and Hall of Fame induction from 1980-2016.
- Comparing rookie seasons across different years to identify common trends in terms of statistical contributions and development over time
If you use this dataset in your research, please credit the original authors. Data Source
License: Dataset copyright by authors.
- You are free to:
  - Share: copy and redistribute the material in any medium or format for any purpose, even commercially.
  - Adapt: remix, transform, and build upon the material for any purpose, even commercially.
- You must:
  - Give appropriate credit: provide a link to the license, and indicate if changes were made.
  - ShareAlike: distribute your contributions under the same license as the original.
  - Keep intact all notices that refer to this license, including copyright notices.
File: NBA Rookies by Year_Hall of Fame Class.csv

| Column name | Description |
|:------------|:------------|
| Name | The name of... |