This release contains experimental data on the pozzolanic activity of calcined coal gangue. "The data on the strength" presents the compressive and flexural strength of cement mortar specimens (40×40×160 mm) containing 30% calcined coal gangue, for different calcination temperatures and curing ages (3, 7, and 28 days). "Column chart of strength" represents the flexural and compressive strength data above as bar charts, with temperature intervals on the x-axis and flexural and compressive strength on the y-axis. "R3 activity test data" records the weights before and after calcination, along with the weight difference representing the combined water content measured through R3 activity testing. "The bar chart of R3 activity test" represents the combined water content as bar charts, with temperature intervals on the x-axis and combined water content on the y-axis. Thermogravimetric data show the changes in TG and DTG with respect to temperature (T). FTIR curve data at different temperatures include wavenumber and absorbance values. XRD curve data record diffraction angle (degrees) and intensity. The release also includes 80 scanning electron microscope images of coal gangue powder calcined at different temperatures.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
About the datasets:
- Domain: Finance
- Project: Bank loan of customers
- Datasets: Finance_1.xlsx & Finance_2.xlsx
- Dataset type: Excel data
- Dataset size: each Excel file has 39k+ records
KPIs:
1. Year-wise loan amount stats
2. Grade- and sub-grade-wise revol_bal
3. Total payment for verified status vs. total payment for non-verified status
4. State-wise loan status
5. Month-wise loan status
6. Further insights based on your understanding of the data
Process:
1. Understanding the problem
2. Data collection
3. Data cleaning
4. Exploring and analyzing the data
5. Interpreting the results
The workbook demonstrates Power Query, Power Pivot, data merging, clustered bar charts, clustered column charts, line charts, 3D pie charts, a dashboard, slicers, a timeline, and formatting techniques.
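Although the workbook implements these KPIs with Power Query and Power Pivot, the first KPI can also be sketched outside Excel. Below is a minimal R sketch; the column names loan_amnt and issue_d are assumptions based on typical bank-loan exports, not the documented schema.

# Minimal sketch of KPI 1 (year-wise loan amount stats) in R.
# Column names "loan_amnt" and "issue_d" are assumptions.
library(readxl)

loans <- read_excel("Finance_1.xlsx")

# Derive the year and summarise loan amounts per year.
loans$year <- format(as.Date(loans$issue_d), "%Y")
aggregate(loan_amnt ~ year, data = loans, FUN = sum)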
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorical scatterplots with R for biologists: a step-by-step guide
Benjamin Petre¹, Aurore Coince², Sophien Kamoun¹
¹ The Sainsbury Laboratory, Norwich, UK; ² Earlham Institute, Norwich, UK
Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.
Protocol
• Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column ‘Replicate’ indicates the biological replicates; in the example, the month and year during which the replicate was performed are indicated. The second column ‘Condition’ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column ‘Value’ contains the continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import into R.
• Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.
• Step 3: save the graph as a .pdf file. Adjust the window to your liking and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.
Notes
• Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, Packages & Data -> Package Installer -> enter ‘ggplot2’ in the Package Search space and click on ‘Get List’. Select ‘ggplot2’ in the Package column and click on ‘Install Selected’. Install all dependencies as well.
• Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.
graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()
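The full script is provided in the accompanying PowerPoint slide, which is not reproduced here. For readers without the slide, a minimal sketch of such a script, assuming only ggplot2 and the three-column input from Step 1, could look like this:

# Minimal sketch of the protocol's script, assuming only ggplot2.
# The authoritative version is in the accompanying PowerPoint slide.
library(ggplot2)

# Step 1 input: a .csv file with columns Replicate, Condition, Value.
data <- read.csv(file.choose())

# Map conditions to the x-axis and values to the y-axis.
graph <- ggplot(data, aes(x = Condition, y = Value))

# Superimpose boxplots (outliers as black dots) on jittered points
# coloured by replicate; swap in scale_y_log10() as in Note 2 if needed.
graph + geom_boxplot(outlier.colour = 'black', colour = 'black') +
  geom_jitter(aes(col = Replicate)) +
  theme_bw()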
References
Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.
Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035.
Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains a list of 186 Digital Humanities projects leveraging information visualisation methods. Each project has been classified according to visualisation and interaction techniques, narrativity and narrative solutions, domain, methods for the representation of uncertainty and interpretation, and the employment of critical and custom approaches to visually represent humanities data.
The project_id column contains unique internal identifiers assigned to each project, while the last_access column records the most recent date (in DD/MM/YYYY format) on which each project was reviewed, based on the web address specified in the url column.
The remaining columns can be grouped into descriptive categories aimed at characterising projects according to different aspects:
Narrativity. It reports the presence of narratives employing information visualisation techniques. Here, the term narrative encompasses both author-driven linear data stories and more user-directed experiences where the narrative sequence is determined by user exploration [1]. We define 2 columns to identify projects using visualisation techniques in narrative or non-narrative sections. Both conditions can be true for projects employing visualisations in both contexts. Columns:
- non_narrative (boolean)
- narrative (boolean)
Domain. The humanities domain to which the project is related. We rely on [2] and the chapters of the first part of [3] to abstract a set of general domains. Column: domain (categorical), with the following values:
- History and archaeology
- Art and art history
- Language and literature
- Music and musicology
- Multimedia and performing arts
- Philosophy and religion
- Other: both extra-list domains and cases of collections without a unique or specific thematic focus.
Visualisation of uncertainty and interpretation. Building upon the frameworks proposed by [4] and [5], a set of categories was identified, highlighting a distinction between precise and impressional communication of uncertainty. Precise methods explicitly represent quantifiable uncertainty such as missing, unknown, or uncertain data, precisely locating and categorising it using visual variables and positioning. Two sub-categories are interactive distinction, when uncertain data is not visually distinguishable from the rest of the data but can be dynamically isolated or included/excluded categorically through interaction techniques (usually filters); and visual distinction, when uncertainty visually “emerges” from the representation by means of dedicated glyphs and spatial or visual cues and variables. On the other hand, impressional methods communicate the constructed and situated nature of data [6], exposing the interpretative layer of the visualisation and indicating more abstract and unquantifiable uncertainty using graphical aids or interpretative metrics. Two sub-categories are: ambiguation, when the use of graphical expedients—like permeable glyph boundaries or broken lines—visually conveys the ambiguity of a phenomenon; and interpretative metrics, when expressive, non-scientific, or non-punctual metrics are used to build a visualisation. Column:
uncertainty_interpretation (categorical), with the following values:
- Interactive distinction
- Visual distinction
- Ambiguation
- Interpretative metrics
Critical adaptation. We identify projects in which, with regard to at least one visualisation, the following criteria are fulfilled: 1) it avoids repurposing prepackaged, generic-use, or ready-made solutions; 2) it is tailored and unique, reflecting the peculiarities of the phenomena at hand; 3) it avoids simplifications, embracing and depicting complexity and promoting time-consuming visualisation-based inquiry. Column:
critical_adaptation (boolean)
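To illustrate how these descriptive columns can be queried, here is a minimal R sketch; the file name dh_projects.csv is a placeholder rather than the dataset's documented name.

# Minimal sketch; "dh_projects.csv" is a placeholder file name.
projects <- read.csv("dh_projects.csv", stringsAsFactors = FALSE)

# Frequency of each humanities domain across the projects.
table(projects$domain)

# Cross-tabulate narrativity against critical adaptation.
table(projects$narrative, projects$critical_adaptation)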
Non-temporal visualisation techniques. We adopt and partially adapt the terminology and definitions from [7]. A column is defined for each type of visualisation and accounts for its presence within a project, also including stacked layouts and more complex variations. Columns and inclusion criteria:
- plot (boolean): visual representations that map data points onto a two-dimensional coordinate system.
- cluster_or_set (boolean): sets or cluster-based visualisations used to unveil possible inter-object similarities.
- map (boolean): geographical maps used to show spatial insights. While we do not specify the variants of maps (e.g., pin maps, dot density maps, flow maps, etc.), we make an exception for maps where each data point is represented by another visualisation (e.g., a map where each data point is a pie chart) by accounting for the presence of both in their respective columns.
- network (boolean): visual representations highlighting relational aspects through nodes connected by links or edges.
- hierarchical_diagram (boolean): tree-like structures such as tree diagrams, radial trees, and dendrograms. They differ from networks in their strictly hierarchical structure and absence of closed connection loops.
- treemap (boolean): still hierarchical, but highlighting quantities expressed by means of area size. It also includes circle-packing variants.
- word_cloud (boolean): clouds of words, where each instance’s size is proportional to its frequency in a related context.
- bars (boolean): includes bar charts, histograms, and variants. It coincides with “bar charts” in [7] but uses a more generic term to refer to all bar-based visualisations.
- line_chart (boolean): the display of information as sequential data points connected by straight-line segments.
- area_chart (boolean): similar to a line chart but with a filled area below the segments. It also includes density plots.
- pie_chart (boolean): circular graphs divided into slices, which can also use multi-level solutions.
- plot_3d (boolean): plots that use a third dimension to encode an additional variable.
- proportional_area (boolean): representations used to compare values through area size, typically using circle- or square-like shapes.
- other (boolean): all other types of non-temporal visualisations that do not fall into the aforementioned categories.
Temporal visualisations and encodings. In addition to non-temporal visualisations, a group of techniques to encode temporality is considered in order to enable comparisons with [7]. Columns:
- timeline (boolean): the display of a list of data points or spans in chronological order. Timelines working either with a scale or simply displaying events in sequence are included. As in [7], we also include structured solutions resembling Gantt chart layouts.
- temporal_dimension (boolean): reports when time is mapped to any dimension of a visualisation, with the exclusion of timelines. We use the term “dimension” rather than “axis” as in [7], as more appropriate for radial layouts or more complex representational choices.
- animation (boolean): temporality is perceived through an animation changing the visualisation according to time flow.
- visual_variable (boolean): another visual encoding strategy is used to represent any temporality-related variable (e.g., colour).
Interaction techniques. A set of categories to assess the interaction techniques afforded by each project, based on the concept of user intent [8] and user-allowed data actions [9]. The following categories roughly match the “processing”, “mapping”, and “presentation” actions from [9] and the manipulative subset of methods of the “how” an interaction is performed in the conception of [10]. Only interactions that affect the visual representation or the aspect of data points, symbols, and glyphs are taken into consideration. Columns:
- basic_selection (boolean): the demarcation of an element, either for the duration of the interaction or more permanently until the occurrence of another selection.
- advanced_selection (boolean): the demarcation involves both the selected element and connected elements within the visualisation, or leads to brush-and-link effects across views. Basic selection is tacitly implied.
- navigation (boolean): interactions that allow moving, zooming, panning, rotating, and scrolling the view, but only when applied to the visualisation and not to the web page. It also includes “drill” interactions (navigating through different levels or portions of data detail, often generating a new view that replaces or accompanies the original) and “expand” interactions generating new perspectives on data by expanding and collapsing nodes.
- arrangement (boolean): methods to organise visualisation elements (symbols, glyphs, etc.) or multi-visualisation
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data for Figure SPM.4 from the Summary for Policymakers (SPM) of the Working Group I (WGI) Contribution to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6).
Figure SPM.4 panel a shows global emissions projections for CO2 and a set of key non-CO2 climate drivers, for the core set of five IPCC AR6 scenarios. Figure SPM.4 panel b shows attributed warming in 2081-2100 relative to 1850-1900 for total anthropogenic, CO2, other greenhouse gases, and other anthropogenic forcings for five Shared Socio-economic Pathway (SSP) scenarios.
How to cite this dataset
When citing this dataset, please include both the data citation below (under 'Citable as') and the following citation for the report component from which the figure originates:
IPCC, 2021: Summary for Policymakers. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 3−32, doi:10.1017/9781009157896.001.
The figure has two panels, with data provided for all panels in subdirectories named panel_a and panel_b.
This dataset contains:
The five illustrative SSP (Shared Socio-economic Pathway) scenarios are described in Box SPM.1 of the Summary for Policymakers and Section 1.6.1.1 of Chapter 1.
Data provided in relation to figure
Panel a:
The first column includes the years, while the next columns include the data per scenario and per climate forcer for the line graphs.
Data file: Sulfur_dioxide_Mt SO2_yr.csv relates to the sulfur dioxide emissions panel.
Panel b:
Data file: ts_warming_ranges_1850-1900_base_panel_b.csv
- Rows 2 to 5 relate to the first bar chart (cyan).
- Rows 6 to 9 relate to the second bar chart (blue).
- Rows 10 to 13 relate to the third bar chart (orange).
- Rows 14 to 17 relate to the fourth bar chart (red).
- Rows 18 to 21 relate to the fifth bar chart (brown).
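For orientation, a minimal R sketch of how panel b's row blocks could be separated after loading; the row ranges follow the description above, while everything else (including how the columns are used) is an assumption about the file's layout.

# Hypothetical sketch: split the panel-b file into the five scenario
# blocks listed above (four rows per bar chart). File rows 2-21 become
# data rows 1-20 once the header line is consumed by read.csv.
panel_b <- read.csv("ts_warming_ranges_1850-1900_base_panel_b.csv")

blocks <- split(panel_b[1:20, ], rep(1:5, each = 4))
names(blocks) <- c("cyan", "blue", "orange", "red", "brown")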
Sources of additional information
The following weblinks are provided in the Related Documents section of this catalogue record:
- Link to the report webpage, which includes the report component containing the figure (Summary for Policymakers) and the Supplementary Material for Chapter 1, which contains details on the input data used in Table 1.SM.1 (Cross-Chapter Box 1.4, Figure 2).
- Link to the related publication for input data used in panel a.
By Homeland Infrastructure Foundation [source]
The Submarine Cables dataset provides a comprehensive collection of features related to submarine cables. It includes information such as the scale band, description, and effective dates of these cables. These data are specifically designed to support coastal planning at both regional and national scales.
The dataset is derived from 2010 NOAA Electronic Navigational Charts (ENCs), along with 2009 NOAA Raster Navigational Charts (RNCs) which were updated in 2013 using the most recent RNCs as a reference point. The source material's scale varied significantly, resulting in discontinuities between multiple sources that were resolved with minimal spatial adjustments.
Polyline features representing submarine cables were extracted from the original sources while excluding 'cable areas' noted within the data. The S-57 data model was modified for improved readability and performance purposes.
Overall, this dataset provides valuable information regarding the occurrence and characteristics of submarine cables in and around U.S. navigable waters. It serves as an essential resource for coastal planning efforts at various geographic scales.
Here's a guide on how to effectively utilize this dataset:
1. Familiarize Yourself with the Columns
The dataset contains multiple columns that provide important information:
- scaleBand: this categorical column indicates the scale band of each submarine cable.
- description: this text column provides a description of each submarine cable.
- effectiveDate: indicates the effective date of the information about each submarine cable.
Understanding these columns will help you navigate and interpret the data effectively.
2. Explore Scale Bands
Start by analyzing the distribution of different scale bands in the dataset. The scale band categorizes submarine cables based on their size or capacity. Identifying patterns or trends within specific scale bands can provide valuable insights into how submarine cables are deployed.
For example, you could analyze which scale bands are most commonly used in certain regions or countries, helping coastal planners understand infrastructure needs and potential connectivity gaps.
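A minimal R sketch of this step, assuming the cable features have been exported to a CSV; the file name submarine_cables.csv is a placeholder, not part of the published data.

# Hypothetical sketch; "submarine_cables.csv" is a placeholder export.
cables <- read.csv("submarine_cables.csv", stringsAsFactors = FALSE)

# Distribution of cables across scale bands.
scale_counts <- sort(table(cables$scaleBand), decreasing = TRUE)
print(scale_counts)

# Simple bar chart of the distribution.
barplot(scale_counts, xlab = "Scale band", ylab = "Number of cables")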
3. Analyze Cable Descriptions
The description column provides detailed information about each submarine cable's characteristics, purpose, or intended use. By examining these descriptions, you can uncover specific attributes related to each cable.
This information can be crucial when evaluating potential impacts on marine ecosystems, identifying areas prone to damage or interference with other maritime activities, or understanding connectivity options for coastal regions.
4. Consider Effective Dates
Effective dates play an important role in keeping track of when the information about a particular cable was collected or updated.
By considering effective dates over time, you can:
- Monitor changes in infrastructure deployment strategies.
- Identify areas where new cables have been installed.
- Track outdated infrastructure that may need replacement or upgrades.
5. Combine with Other Datasets
To gain a comprehensive understanding and unlock deeper insights, consider integrating this dataset with other relevant datasets. For example:
- Population density data can help identify areas in high need of improved connectivity.
- Coastal environmental data can help assess potential ecological impacts of submarine cables.
By merging datasets, you can explore relationships, draw correlations, and make more informed decisions based on the available information.
6. Visualize the Data
Create meaningful visualizations to better understand and communicate insights from the dataset. Utilize scatter plots, bar charts, heatmaps, or GIS maps.
- Coastal Planning: The dataset can be used for coastal planning at both regional and national scales. By analyzing the submarine cable features, planners can assess the impact of these cables on coastal infrastructure development and design plans accordingly.
- Communication Network Analysis: The dataset can be utilized to analyze the connectivity and coverage of submarine cable networks. This information is valuable for telecommunications companies and network providers to understand gaps in communication infras...
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
This comprehensive dataset presents the global refugee landscape by providing a detailed overview of refugee and displacement statistics from various countries and territories over a span of time. With a total of 107,980 rows and 11 columns, this dataset delves into the complexities of forced migration and human displacement, offering insights into the movements of refugees, asylum-seekers, internally displaced persons (IDPs), returned refugees and IDPs, stateless individuals, and other populations of concern.
Columns in the dataset:
Visualization ideas:
- Time series analysis: plot the trends in different refugee populations over the years, such as refugees, asylum-seekers, IDPs, returned refugees, etc.
- Geographic analysis: create heatmaps or choropleth maps to visualize refugee flows between different countries and regions.
- Origin and destination analysis: show the top countries of origin and the top host countries for refugees using bar charts.
- Pie charts: visualize the distribution of different refugee populations (refugees, asylum-seekers, IDPs, etc.) as a percentage of the total population.
- Stacked area chart: display the cumulative total of different refugee populations over time to observe changes and trends.
Data modeling and machine learning ideas:
- Time series forecasting: use machine learning algorithms like ARIMA or LSTM to predict future refugee trends based on historical data.
- Clustering: group countries based on similar refugee patterns using clustering algorithms such as K-Means or DBSCAN.
- Classification: build a classification model to predict whether a country will experience a significant increase in refugee inflow based on historical and socio-political factors.
- Sentiment analysis: analyze social media or news data to determine the sentiment around refugee-related topics and how it correlates with migration patterns.
- Network analysis: construct a network graph to visualize the connections and interactions between countries in terms of refugee flows.
These visualization and modeling ideas can provide meaningful insights into the global refugee crisis and aid in decision-making, policy formulation, and humanitarian efforts.
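As a starting point, the time-series idea above can be sketched in R; the file name and the Year and Refugees column names are assumptions, not the dataset's documented schema.

# Hypothetical sketch; file and column names ("Year", "Refugees")
# are assumptions rather than the documented schema.
refugees <- read.csv("refugee_data.csv")

# Total refugees per year, plotted as a simple line chart.
totals <- aggregate(Refugees ~ Year, data = refugees, FUN = sum)
plot(totals$Year, totals$Refugees, type = "l",
     xlab = "Year", ylab = "Total refugees")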
By Throwback Thursday [source]
This dataset provides comprehensive information on injuries that occurred in the National Football League (NFL) during the period from 2012 to 2017. The dataset includes details such as the type of injury sustained by players, the specific situation or event that led to the injury, and the type of season (regular season or playoffs) during which each injury occurred.
The Injury Type column categorizes the various types of injuries suffered by players, providing insights into specific anatomical areas or specific conditions. For example, it may include injuries like concussions, ankle sprains, knee ligament tears, shoulder dislocations, and many others.
The Scenario column offers further granularity by describing the specific situation or event that caused each injury. It can provide context about whether an injury happened during a tackle, collision with another player or object on field (such as goalposts), blocking maneuvers gone wrong, falls to the ground resulting from being off-balance while making plays, and other possible scenarios leading to player harm.
The Season Type column classifies when exactly each injury occurred within a particular year. It differentiates between regular season games and playoff matches – identifying whether an incident took place during high-stakes postseason competition or routine games throughout the regular season.
The Injuries column represents numeric data detailing how many times a particular combination of year-injury type-scenario-season type has occurred within this dataset's timeframe – measuring both occurrence frequency and severity for each unique combination.
Overall, this extensive dataset provides valuable insight into NFL injuries over a six-year span. By understanding which types of injuries are most prevalent under certain scenarios and during different seasons of play - such as regular seasons versus playoffs - stakeholders within professional football can identify potential areas for improvement in safety measures and develop strategies aimed at reducing player harm on the field.
The dataset contains the following columns:
Year: This column represents the year in which the injury occurred. It allows you to filter and analyze data based on specific years.
Injury Type: This column indicates the specific type of injury sustained by players. It includes various categories such as concussions, fractures, sprains, strains, etc.
Scenario: The scenario column describes the situation or event that led to each injury. It provides context for understanding how injuries occur during football games.
Season Type: This column categorizes injuries based on whether they occurred during regular season games or playoff games.
Injuries: The number of injuries recorded for each specific combination of year, injury type, scenario, and season type is mentioned in this column's numeric values.
Using this dataset effectively involves several steps:
Data Exploration: Start by examining all available columns carefully and making note of their meanings and data types (categorical or numeric).
Filtering Data by Year or Season Type: If you are interested in analyzing injuries during particular years or specific season types (regular vs. playoffs), apply filters using one or both of these columns.
3a. Analyzing Injury Types: To gain insights into the different types of reported injuries over the time periods specified by your filters (e.g., a given year), group the data by Injury Type and calculate aggregate statistics like maximum occurrences or average frequency across years/seasons.
3b. Scenario-based Analysis: Group the data by Scenario and calculate aggregate values to determine which situations or events lead to more injuries.
Exploring Injury Trends: Explore the overall trend of injuries throughout the 2012-2017 period to identify any significant patterns, spikes, or declines in injury occurrence.
Visualizing Data: Utilize appropriate visualization techniques such as bar graphs, line charts, or pie charts to present your findings effectively. These visualizations will help you communicate your analysis concisely and provide clear insights into both common injuries and specific scenarios.
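A minimal R sketch of steps 3a and the bar-graph visualization; the file name is a placeholder, while the column names follow the descriptions above.

# Hypothetical sketch; the file name is a placeholder. check.names=FALSE
# preserves the "Injury Type" column name with its space.
nfl <- read.csv("nfl_injuries_2012_2017.csv", check.names = FALSE)

# Total injuries per injury type, sorted and shown as a bar chart.
by_type <- aggregate(Injuries ~ `Injury Type`, data = nfl, FUN = sum)
by_type <- by_type[order(-by_type$Injuries), ]

barplot(by_type$Injuries, names.arg = by_type$`Injury Type`,
        las = 2, ylab = "Total injuries (2012-2017)")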
Drawing Conclusions: Based on your analysis of the
- Understanding trends in NFL injuries: This dataset can be used to analyze the number and types of in...
By Gove Allen [source]
The Paintings Dataset is a rich and diverse collection of various paintings from different artists spanning across multiple time periods. It includes a wide range of art styles, techniques, and subjects, providing an extensive resource for art enthusiasts, historians, researchers, and anyone interested in exploring the world of visual arts.
This dataset aims to capture the essence of artistic expression through its vast array of paintings. From classical masterpieces to contemporary works, it offers a comprehensive perspective on the evolution of artistic creativity throughout history.
Each record in this dataset represents an individual painting with detailed information such as artist's name, artwork title (if applicable), genre/style classification (e.g., landscape, portrait), medium (e.g., oil on canvas), dimensions (height and width), and provenance details if available. Additionally, some records may include additional metadata like the year or era in which the artwork was created.
By providing such comprehensive data about each painting included within this dataset, it enables users to study various aspects of art history. Researchers can analyze trends across different time periods or explore specific artistic movements by filtering the dataset based on genre or style categories. Art enthusiasts can also use this dataset to discover new artists or artworks that align with their interests.
This valuable collection appeals not only to those seeking knowledge or inspiration from renowned artworks but also encourages exploration into lesser-known pieces that may have been overlooked in mainstream discourse. It fosters engagement with cultural heritage while promoting diversity and inclusivity within the realm of visual arts.
Whether you are interested in studying classical works by universally acclaimed painters like Leonardo da Vinci or exploring modern expressions by emerging contemporary artists, this Paintings Dataset has something for everyone who appreciates aesthetics and enjoys unraveling stories through brushstrokes on canvas.
How to Use the Paintings Dataset
Welcome to the Paintings Dataset! This dataset is a comprehensive collection of various paintings from different artists and time periods. It contains information about the artist, title, genre, style, and medium of each painting. Whether you are an art enthusiast, researcher, or just curious about paintings, this guide will help you navigate through this dataset easily.
1. Understanding the Columns
This dataset consists of several columns that provide detailed information about each painting. Here is a brief description of each column:
- Artist: The name of the artist who created the painting.
- Title: The title or name given to the artwork by the artist.
- Genre: The artistic category or subject matter depicted in the painting.
- Style: The specific artistic style or movement associated with the painting.
- Medium: The materials and techniques used by the artist to create the artwork.
2. Exploring Artists and Their Paintings
One interesting way to use this dataset is to explore individual artists and their artworks. You can filter by a specific artist's name in order to retrieve all their paintings included in this collection.
For example: if you are interested in exploring all paintings by Leonardo da Vinci, simply filter the Artist column for Leonardo da Vinci using your preferred data analysis tool.
3. Analyzing Painting Genres
The genre column allows you to analyze different categories within this collection of paintings. You can examine popular genres or compare them across different eras.
To analyze genres:
- Get the unique values of the Genre column.
- Count the frequency of each genre value.
- Visualize the results using bar charts or other graphical representations.
You might discover which genres were more predominant during certain periods or which artists were known for specific subjects!
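A minimal R sketch of this genre analysis, which also covers the artist filter from section 2; the file name paintings.csv is a placeholder.

# Hypothetical sketch; "paintings.csv" is a placeholder file name.
paintings <- read.csv("paintings.csv", stringsAsFactors = FALSE)

# Count and plot the frequency of each genre.
genre_counts <- sort(table(paintings$Genre), decreasing = TRUE)
barplot(genre_counts, las = 2, ylab = "Number of paintings")

# All paintings by a given artist, e.g. Leonardo da Vinci.
da_vinci <- subset(paintings, Artist == "Leonardo da Vinci")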
4. Investigating Artistic Styles
Similar to genres, artistic styles also play an essential role in the world of painting. This dataset includes various styles like Impressionism, Cubism, Realism, etc. By analyzing the artistic styles column, you can explore trends and shifts in artistic movements.
To investigate styles:
- Get the unique values of the Style column.
- Count the frequency of each style value.
- Visualize the results using bar charts or other graphical...