CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Time-Series Matrix (TSMx): A visualization tool for plotting multiscale temporal trends. TSMx is an R script developed to facilitate multi-temporal-scale visualizations of time-series data. The script requires only a two-column CSV of years and values to plot the slope of the linear regression line for all possible year combinations from the supplied temporal range. The outputs include a time-series matrix showing slope direction based on the linear regression, slope values plotted with colors indicating magnitude, and results of a Mann-Kendall test. The start year is indicated on the y-axis and the end year is indicated on the x-axis. In the example below, the cell in the top-right corner is the direction of the slope for the temporal range 2001–2019. The red line corresponds with the temporal range 2010–2019, and an arrow is drawn from the cell that represents that range. One cell is highlighted with a black border to demonstrate how to read the chart; that cell represents the slope for the temporal range 2004–2014. This publication entry also includes an Excel template that produces the same visualizations without the need to interact with any code, though minor modifications will be needed to accommodate year ranges other than the one provided. TSMx for R was developed by Georgios Boumis; TSMx was originally conceptualized and created by Brad G. Peter in Microsoft Excel. Please refer to the associated publication: Peter, B.G., Messina, J.P., Breeze, V., Fung, C.Y., Kapoor, A. and Fan, P., 2024. Perspectives on modifiable spatiotemporal unit problems in remote sensing of agriculture: evaluating rice production in Vietnam and tools for analysis. Frontiers in Remote Sensing, 5, p.1042624. https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2024.1042624 TSMx sample chart from the supplied Excel template. Data represent the productivity of rice agriculture in Vietnam as measured via EVI (enhanced vegetation index) from the NASA MODIS data product (MOD13Q1.V006).
TSMx R script:

```r
# import packages
library(dplyr)
library(readr)
library(ggplot2)
library(tibble)
library(tidyr)
library(forcats)
library(Kendall)

options(warn = -1) # disable warnings

# read data (.csv file with "Year" and "Value" columns)
data <- read_csv("EVI.csv")

# prepare row/column names for output matrices
years <- data %>% pull("Year")
r.names <- years[-length(years)]
c.names <- years[-1]
years <- years[-length(years)]

# initialize output matrices
sign.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
pval.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))
slope.matrix <- matrix(data = NA, nrow = length(years), ncol = length(years))

# function to return remaining years given a start year
getRemain <- function(start.year) {
  years <- data %>% pull("Year")
  start.ind <- which(data[["Year"]] == start.year) + 1
  remain <- years[start.ind:length(years)]
  return(remain)
}

# function to subset data for a start/end year combination
splitData <- function(end.year, start.year) {
  keep <- which(data[['Year']] >= start.year & data[['Year']] <= end.year)
  batch <- data[keep, ]
  return(batch)
}

# function to fit linear regression and return slope direction
fitReg <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(sign(slope))
}

# function to fit linear regression and return slope magnitude
fitRegv2 <- function(batch) {
  trend <- lm(Value ~ Year, data = batch)
  slope <- coefficients(trend)[[2]]
  return(slope)
}

# function to implement Mann-Kendall (MK) trend test and return significance
# the test is implemented only for n >= 8
getMann <- function(batch) {
  if (nrow(batch) >= 8) {
    mk <- MannKendall(batch[['Value']])
    pval <- mk[['sl']]
  } else {
    pval <- NA
  }
  return(pval)
}

# function to return slope direction for all combinations given a start year
getSign <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  signs <- lapply(combs, fitReg)
  return(signs)
}

# function to return MK significance for all combinations given a start year
getPval <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  pvals <- lapply(combs, getMann)
  return(pvals)
}

# function to return slope magnitude for all combinations given a start year
getMagn <- function(start.year) {
  remaining <- getRemain(start.year)
  combs <- lapply(remaining, splitData, start.year = start.year)
  magns <- lapply(combs, fitRegv2)
  return(magns)
}

# retrieve slope direction, MK significance, and slope magnitude
signs <- lapply(years, getSign)
pvals <- lapply(years, getPval)
magns <- lapply(years, getMagn)

# fill in output matrices
dimension <- nrow(sign.matrix)
for (i in 1:dimension) {
  sign.matrix[i, i:dimension] <- unlist(signs[i])
  pval.matrix[i, i:dimension] <- unlist(pvals[i])
  slope.matrix[i, i:dimension] <- unlist(magns[i])
}
sign.matrix <-...
```
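For reference, the script assumes a plain two-column CSV with "Year" and "Value" headers. The following is a minimal sketch that writes an illustrative input file of that shape (the numbers are random placeholders, not the Vietnam EVI data):

```r
# illustrative only: build a two-column CSV with "Year" and "Value" headers,
# the shape TSMx expects; the values below are random placeholders
library(readr)
example <- data.frame(Year = 2001:2019,
                      Value = round(runif(19, min = 0.3, max = 0.6), 2))
write_csv(example, "EVI.csv")
```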
Study data of a comparative study of visualizations for multiple time series, including anonymized participant data from Prolific, data set generation scripts, source code for the study framework, and analysis scripts. This repository also serves as supplemental material for the publication titled "A Comparative Study of Visualizations for Multiple Time Series", presented at IVAPP 2022. The goal of the study was to gain insight into how well three visualization techniques for multiple time series (line charts, stream graphs, and aligned area charts) can be understood when solving three basic tasks: deciding which time series has the highest value at a given time point, deciding which time series has the highest value over all time steps (area under the graph), and deciding at which of two time points the sum of all time series is the largest. The study was performed online on the Prolific platform with 51 participants. Each participant was shown at least 108 stimuli. The measured data for each participant consist mainly of which stimuli they answered correctly and how long they took. For more information about the data, please consult the paper and the README.txt.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
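As an illustration of the recommended practice, the sketch below produces a univariate scatterplot for a small two-group sample in base R; it uses simulated values and is not a substitute for the Excel templates supplied with the paper:

```r
# univariate scatterplot for a small-sample, two-group comparison;
# the data are simulated and purely illustrative
set.seed(1)
values <- c(rnorm(8, mean = 10), rnorm(8, mean = 12))
group  <- rep(c("Control", "Treatment"), each = 8)
stripchart(values ~ group, vertical = TRUE, method = "jitter",
           pch = 16, xlab = "Group", ylab = "Measured value")
```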
Use the Chart Viewer template to display bar charts, line charts, pie charts, histograms, and scatterplots to complement a map. Include multiple charts to view with a map or side by side with other charts for comparison. Up to three charts can be viewed side by side or stacked, but you can access and view all the charts that are authored in the map. Examples: Present a bar chart representing average property value by county for a given area. Compare charts based on multiple population statistics in your dataset. Display an interactive scatterplot based on two values in your dataset along with an essential set of map exploration tools. Data requirements: The Chart Viewer template requires a map with at least one chart configured. Key app capabilities: Multiple layout options - Choose Stack to display charts stacked with the map, or choose Side by side to display charts side by side with the map. Manage chart - Reorder, rename, or turn charts on and off in the app. Multiselect chart - Compare two charts in the panel at the same time. Bookmarks - Allow users to zoom and pan to a collection of preset extents that are saved in the map. Home, Zoom controls, Legend, Layer List, Search. Supportability: This web app is designed responsively to be used in browsers on desktops, mobile phones, and tablets. We are committed to ongoing efforts towards making our apps as accessible as possible. Please feel free to leave a comment on how we can improve the accessibility of our apps for those who use assistive technologies.
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
An aggregated, multi-agency data set with information on college degrees, statewide degree attainment, average school facility age, school facility maintenance evaluation, and Minority Business Enterprise (MBE) and Small-Business Reserve (SBR) program participation. The data set contains data from the Maryland Higher Education Commission (MHEC), the US Census Bureau, the Inter-Agency Council on School Construction (IAC), and the Governor's Office of Minority Affairs (GOMA)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Model summary for the multiple linear regression model with percentage as the dependent variable.
The Basic Charts extension for CKAN provides the ability to display data from DataStore resources as interactive charts. This extension adds several chart types to the resource view options within CKAN, visualizing data in accessible formats. Utilizing the Flot Charts JavaScript library, it offers compatibility across a wide range of browsers, including older versions like IE6+. Key Features: Line Chart Visualization: Creates line charts to display trends and relationships between data points over a specified axis, suitable for analyzing changes over time or continuous variables.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024 |
HISTORICAL DATA | 2019 - 2024 |
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
MARKET SIZE 2023 | 12.43 (USD Billion) |
MARKET SIZE 2024 | 13.06 (USD Billion) |
MARKET SIZE 2032 | 19.4 (USD Billion) |
SEGMENTS COVERED | Projection Technology, Mounting Type, Chart Format, Purpose, Regional |
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA |
KEY MARKET DYNAMICS | Technological advancements, increased prevalence of eye disorders, growing awareness of eye care, government initiatives, increasing healthcare expenditure |
MARKET FORECAST UNITS | USD Billion |
KEY COMPANIES PROFILED | Huvitz, Nanjing Qinhuan Optical Technology, Nidek, Beijing Jingu Instrument, Reichert, Suzhou Bon Optoelectronic Technology, Welvision, Changchun Jinghua Optics, Oculus Optikgeräte, HaagStreit Holding, Wuhan Vision Group, Wenzhou Kang Hua Electronic, Topcon Medical Systems, Shanghai Novel Optics, Sino United Optics |
MARKET FORECAST PERIOD | 2025 - 2032 |
KEY MARKET OPPORTUNITIES | Growth in the ophthalmology sector, rising demand for early detection of eye diseases, technological advancements in eye chart projectors, increased focus on preventive eye care, expansion into emerging markets |
COMPOUND ANNUAL GROWTH RATE (CAGR) | 5.07% (2025 - 2032) |
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Cash-and-Equivalents Time Series for Apple Inc. Apple Inc. designs, manufactures, and markets smartphones, personal computers, tablets, wearables, and accessories worldwide. The company offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad, a line of multi-purpose tablets; and wearables, home, and accessories comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. It also provides AppleCare support and cloud services; and operates various platforms, including the App Store, that allow customers to discover and download applications and digital content, such as books, music, video, games, and podcasts, as well as advertising services, including third-party licensing arrangements and its own advertising platforms. In addition, the company offers various subscription-based services, such as Apple Arcade, a game subscription service; Apple Fitness+, a personalized fitness service; Apple Music, which offers users a curated listening experience with on-demand radio stations; Apple News+, a subscription news and magazine service; Apple TV+, which offers exclusive original content; Apple Card, a co-branded credit card; and Apple Pay, a cashless payment service; and it licenses its intellectual property. The company serves consumers, and small and mid-sized businesses; and the education, enterprise, and government markets. It distributes third-party applications for its products through the App Store. The company also sells its products through its retail and online stores, and direct sales force; and third-party cellular network carriers, wholesalers, retailers, and resellers. Apple Inc. was founded in 1976 and is headquartered in Cupertino, California.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data is organized in a structured manner, where each folder corresponds to a figure in the main text or the SI. Within each folder, sub-folders are used to represent sub-figures. For line-graph data, we adopt the multi-column text (txt) format. We use Origin software to create these line graphs as it provides high-quality and accurate visualizations.
As for other types of data, such as those for non-line graphs, they are stored in common image formats, including PNG, JPG, and TIF. This way, the data can be easily accessed, viewed, and analyzed by other researchers interested in replicating or building upon our work.
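Because the line-graph data are plain multi-column text files, they can also be loaded outside Origin. The following is a minimal base-R sketch; the file name, delimiter, and column layout are assumptions for illustration only:

```r
# read one of the multi-column .txt files (whitespace-delimited assumed) and
# plot every remaining column against the first; the file name is hypothetical
dat <- read.table("Figure1/a/curve_data.txt", header = FALSE)
matplot(dat[[1]], dat[-1], type = "l", lty = 1,
        xlab = "x (column 1)", ylab = "y (columns 2+)")
```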
The Charts extension for CKAN enhances the platform's data visualization capabilities, allowing users to create, manage, and share charts that are linked to CKAN datasets. It lets users build interactive, visually appealing chart representations of data directly within the CKAN environment, providing essential data analysis tools and streamlining visualization into a more intuitive and accessible experience. Key Features: Chart Creation: Enables users to create charts directly from CKAN datasets. Chart Editing: Allows users to modify and customize existing charts. Chart Embedding: Provides the ability to embed created charts into other web applications or platforms for wider dissemination. Chart Sharing: Supports sharing of chart visualizations with other users or groups within or outside the CKAN ecosystem. Multiple Chart Types: Supports a variety of common chart types, including bar charts, line charts, and pie charts; further chart types are not listed explicitly, but the extension appears to be extensible to additional types. Technical Integration: The extension integrates with CKAN as a plugin. To enable it, the chartsview and chartsbuilderview plugins must be added to the CKAN configuration file. The documentation also mentions setting CHARTS_FIELDS when autogenerating documentation for chart-type fields, which implies a level of customization and extensibility for different chart types. The extension requires a properly initialized CKAN instance and relies on validators and helpers, so a correctly configured CKAN environment is needed. Benefits & Impact: By providing tools to create, manage, and share charts, the extension makes it easier for users to understand and communicate insights from their data, fostering better data-driven decision-making; documentation for chart types can also be autogenerated.
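A sketch of the integration step described above; the plugin names come from the description, while the surrounding settings are omitted:

```ini
# ckan.ini -- illustrative excerpt: enable the Charts view and builder plugins
ckan.plugins = ... chartsview chartsbuilderview
```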
The Graph extension for CKAN adds the ability to visualize data resources as graphs, providing users with a more intuitive understanding of the information contained within datasets. It currently supports temporal and categorical graph types, enabling the creation of count-based visualizations over time or across different categories. While the current version is primarily designed for use with an Elasticsearch backend within the Natural History Museum's infrastructure, it is built to be extensible for broader applicability. Key Features: Temporal Graphs: Generates line graphs that display counts of data points over time, based on a designated date field within the resource, making it possible to visualize trends and patterns dynamically. Categorical Graphs: Creates bar charts that show the distribution of counts for various values found within a specified field in a resource, making it easier to understand data groupings. Extensible Backend Architecture: Designed to support multiple backend data storage options, with Elasticsearch currently implemented, paving the way for future integration with other systems like PostgreSQL. Template Customization: Includes a template (templates/graph/view.html) that can be extended to override or add custom content to the graph view, giving full control over the visualization design. Configuration Options: Backend selection through the .ini configuration file; users can choose between Elasticsearch and SQL, allowing administrators to align the extension with their specific requirements. Technical Integration: The Graph extension integrates with CKAN by adding a new view option to resources. Once enabled, the graph view will appear as an available option alongside existing resource viewers. The configuration requires modifying the CKAN .ini file to add 'graph' to the list of enabled plugins and setting the desired backend. The template templates/graph/view.html allows for full customization of the view. Benefits & Impact: The Graph extension enhances the usability of CKAN-managed datasets by providing interactive visualizations of data. Temporal graphs help users identify time-based trends, while categorical graphs illustrate data distribution. The extensible architecture ensures that the extension can be adapted to different data storage systems, improving its versatility. By providing a graphical representation of data, this extension makes it easier to understand complex information, benefiting both data providers and consumers.
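A sketch of the corresponding configuration; the 'graph' plugin name is taken from the description above, but the exact backend option key is an assumption and should be checked against the extension's README:

```ini
# ckan.ini -- illustrative excerpt; the backend option name is assumed
ckan.plugins = ... graph
ckanext.graph.backend = elasticsearch   # or sql
```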
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Patient-drug-disease (PDD) graph dataset, built from electronic medical records (EMRs) and biomedical knowledge graphs. The novel framework used to construct the PDD graph is described in the associated publication. PDD is an RDF graph consisting of PDD facts, where a PDD fact is represented by an RDF triple indicating that a patient takes a drug or is diagnosed with a disease, for instance (pdd:274671, pdd:diagnosed, sepsis). Data files are in the .nt N-Triples format, a line-based syntax for an RDF graph, and can be opened with openly available text-editing software. diagnose_icd_information.nt - contains RDF triples mapping patients to diagnoses, for example (pdd:18740, pdd:diagnosed, icd99592), where pdd:18740 is a patient entity and icd99592 is the ICD-9 code of sepsis. drug_patients.nt - contains RDF triples mapping patients to drugs, for example (pdd:18740, pdd:prescribed, aspirin), where pdd:18740 is a patient entity and aspirin is the drug's name. Background: Electronic medical records contain multi-format electronic medical data that hold an abundance of medical knowledge. Faced with patients' symptoms, experienced caregivers make the right medical decisions based on professional knowledge that accurately grasps the relationships between symptoms, diagnoses, and corresponding treatments. In the associated paper, we aim to capture these relationships by constructing a large, high-quality heterogeneous graph linking patients, diseases, and drugs (PDD) in EMRs. Specifically, we propose a novel framework to extract important medical entities from MIMIC-III (Medical Information Mart for Intensive Care III) and automatically link them with existing biomedical knowledge graphs, including the ICD-9 ontology and DrugBank. The PDD graph presented in this paper is accessible on the Web via a SPARQL endpoint as well as in .nt format in this repository, and provides a pathway for medical discovery and applications, such as effective treatment recommendations. De-identification: It is necessary to mention that MIMIC-III contains clinical information of patients. Although the protected health information was de-identified, researchers who seek to use more clinical data should complete an online training course and then apply for permission to download the complete MIMIC-III dataset: https://mimic.physionet.org/
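Since N-Triples is a line-based syntax, the files can also be inspected programmatically. The base-R sketch below is a simplified illustration that assumes space-separated terms and is not a full N-Triples parser:

```r
# split each line of the patient-drug file into subject, predicate, object;
# simplified parsing -- real N-Triples terms may contain escaped spaces
lines <- readLines("drug_patients.nt")
triples <- do.call(rbind, lapply(lines, function(x) {
  parts <- strsplit(sub(" \\.$", "", x), " ")[[1]]
  c(subject = parts[1], predicate = parts[2],
    object = paste(parts[-(1:2)], collapse = " "))
}))
head(triples)
```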
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains, up to isomorphism, all (15_4,20_3) and (15_5,25_3) configurations, all (16_6,32_3) configurations with nontrivial automorphisms, as well as all 4-regular graphs on 15 vertices, 6-regular graphs on 15 vertices, 3-regular graphs on 16 vertices, and 4-regular graphs on 17 vertices. The configurations uniquely give regular linear spaces with parameters (15|2^45,3^20), (15|2^30,3^25), and (16|2^24,3^32). All files are compressed with gzip.
The dataset supplements the publication "On the Regular Linear Spaces up to Order 16" by Anton Betten, Dieter Betten, Daniel Heinlein, and Patric R. J. Östergård.
In the files containing configurations, each line is a configuration with the syntax: the number of points, the number of blocks, the hexadecimal encoding of the characteristic vector of each block, and finally the letter A followed by the order of the automorphism group.
Example:
Assuming a total of 15 points labeled with {0,...,14}, the characteristic vector of a block {1,3,14} is
(0)100|0000|0000|1010
The first bit is padding, as each hexadecimal digit encodes four bits. Vertical bars designate groups of four bits. Consequently, the block is encoded as
400a
The following example shows the first line of one of the files:
$ zcat conf_15_4_20_3.txt.gz | head -n1
15 20 1081 4101 2201 0c01 0026 004a 0092 4402 008c 0054 0a04 0038 2108 1110 0160 0620 08c0 5200 3400 6800 A1
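To make the encoding concrete, the line above can be decoded back into its blocks with a short base-R sketch (an illustration written against the format described here, not part of the dataset's tooling):

```r
# decode one configuration line: point count, block count, one hexadecimal
# characteristic vector per block, and "A" followed by the automorphism order
line <- "15 20 1081 4101 2201 0c01 0026 004a 0092 4402 008c 0054 0a04 0038 2108 1110 0160 0620 08c0 5200 3400 6800 A1"
tokens     <- strsplit(line, " ")[[1]]
n.points   <- as.integer(tokens[1])
n.blocks   <- as.integer(tokens[2])
hex.blocks <- tokens[3:(2 + n.blocks)]
aut.order  <- as.integer(sub("^A", "", tokens[length(tokens)]))

# turn one hexadecimal token into the set of points it contains; the leftmost
# bit after the padding corresponds to the largest point label
decodeBlock <- function(hex, n.points) {
  digits <- strtoi(strsplit(hex, "")[[1]], base = 16L)
  bits   <- unlist(lapply(digits, function(d) as.integer(intToBits(d))[4:1]))
  bits   <- tail(bits, n.points)      # drop the leading padding bit(s)
  labels <- (n.points - 1):0
  sort(labels[bits == 1])
}

blocks <- lapply(hex.blocks, decodeBlock, n.points = n.points)
blocks[[1]]   # points of the first block
aut.order     # 1 for this line
```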
For the files containing graphs, we apply the graph6 file format but we extend each line by the corresponding number of automorphisms as described for configurations above, without the letter A. Programs for manipulating graphs in the graph6 format can be found in the gtools package that comes with the graph isomorphism program nauty (https://pallini.di.uniroma1.it/). Details regarding the graph6 format can be found in the documentation of nauty (https://pallini.di.uniroma1.it/Guide.html).
For graphs with at most 62 vertices, which holds in all cases here, a line in graph6 format is the ASCII-converted equivalent of the number of vertices plus 63, followed by the bits of the upper triangle of the adjacency matrix read column-wise, padded with zeros to a multiple of six bits, split into groups of six bits, with 63 added to each group.
Example:
Assume a graph with 5 vertices and edges: 02, 04, 13, 34 (the path 2-0-4-3-1), which has the adjacency matrix
00101
00010
10000
01001
10010
Hence, the upper triangle read column-wise is
0100101001
After padding we get
010010100100
and after grouping
010010|100100
Converting to decimal and adding 63 gives
63+16+2|63+32+4
that is
81|99
The number of vertices is 5, so we prepend 5+63=68:
68 81 99
The line in graph6 format is therefore
DQc
and our nonstandard appending of the order of the automorphism group gives
DQc 2
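The same conversion can be reproduced with a short base-R sketch (an illustration for graphs on at most 62 vertices; the gtools utilities remain the reference implementation):

```r
# encode the 5-vertex example above in graph6 format
n <- 5
edges <- rbind(c(0, 2), c(0, 4), c(1, 3), c(3, 4))   # 0-based vertex labels
adj <- matrix(0L, n, n)
for (k in seq_len(nrow(edges))) {
  i <- edges[k, 1] + 1
  j <- edges[k, 2] + 1
  adj[i, j] <- 1L
  adj[j, i] <- 1L
}

# upper triangle read column-wise, padded with zeros to a multiple of six bits
bits <- integer(0)
for (j in 2:n) bits <- c(bits, adj[1:(j - 1), j])
bits <- c(bits, rep(0L, (6 - length(bits) %% 6) %% 6))

# each 6-bit group plus 63 becomes one ASCII character; prepend n + 63
groups <- split(bits, rep(seq_len(length(bits) / 6), each = 6))
vals   <- vapply(groups, function(g) sum(g * 2^(5:0)), numeric(1))
rawToChar(as.raw(c(n + 63, vals + 63)))   # "DQc"
```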
The first line of one of the files is as follows:
$ zcat graph_15_4.txt.gz | head -n1
Ns_???BAwjDoTOY_M_? 2
The orders of the automorphism groups and the numbers of isomorphism classes are as follows. The (up to isomorphism) 114711393113 (16_6,32_3) regular linear spaces with no nontrivial automorphisms are not stored.
Order of automorphism group | (15_4,20_3) | (15_5,25_3) | (16_6,32_3) |
---|---|---|---|
1 | 251712191 | 1442354689 | 114711393113 |
2 | 94229 | 180367 | 1125379 |
3 | 1129 | 2178 | 17287 |
4 | 915 | 936 | 3054 |
5 | 29 | 33 | |
6 | 142 | 180 | 240 |
8 | 85 | 36 | 50 |
9 | 4 | ||
10 | 4 | 4 | |
12 | 10 | 13 | 30 |
15 | 1 | ||
16 | 7 | 3 | |
18 | 4 | 3 | 2 |
20 | 2 | 2 | |
24 | 10 | 5 | 2 |
30 | 1 | ||
32 | 1 | ||
36 | 4 | 2 | |
40 | 2 | 1 | |
48 | 4 | 1 | |
72 | 1 | ||
96 | 1 | ||
120 | 1 | ||
600 | 1 | ||
720 | 1 | ||
total | 251808770 | 1442538454 | 114712539165 |
Order of automorphism group | 4-regular graphs with 15 vertices | 6-regular graphs with 15 vertices | 3-regular graphs with 16 vertices | 4-regular graphs with 17 vertices |
---|---|---|---|---|
1 | 656794 | 1396131168 | 1547 | 76356249 |
2 | 119881 | 69928313 | 1261 | 8665624 |
3 | 17 | 630 | 2 | 127 |
4 | 21500 | 3848635 | 667 | 997704 |
5 | 14 | |||
6 | 409 | 55060 | 15 | 27213 |
8 | 4789 | 274294 | 330 | 131662 |
10 | 10 | 35 | ||
12 | 352 | 21334 | 11 | 12577 |
14 | 4 | |||
16 | 1020 | 23435 | 147 | 19786 |
18 | 1 | 10 | 2 | |
20 | 7 | 12 | ||
24 | 210 | 5596 | 11 | 4344 |
28 | 18 | |||
30 | 4 | 7 | ||
32 | 243 | 2463 | 51 | 3320 |
34 | 3 | |||
36 | 1 | 128 | 53 | |
48 | 106 | 1453 | 33 | 1500 |
56 | 1 | 15 | ||
60 | 2 | 2 | ||
64 | 54 | 285 | 16 | 639 |
68 | 1 | |||
72 | 6 | 165 | 2 | 96 |
96 | 41 | 309 | 24 | 504 |
112 | 7 | |||
120 | 5 | 692 | ||
128 | 10 | 48 | 4 | 132 |
140 | 1 | |||
144 | 10 | 74 | 3 | 82 |
168 | 1 | 1 | ||
192 | 14 | 77 | 20 | 193 |
216 | 2 | 3 | ||
224 | 2 | 6 | ||
240 | 18 | 1 | 2 | 497 |
256 | 1 | 6 | 1 | 24 |
280 | 1 | |||
288 | 5 | 36 | 9 | 53 |
320 | 4 | |||
384 | 6 | 26 | 11 | 58 |
432 | 9 | 3 | 2 | |
448 |
Open Database License (ODbL) v1.0 https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This indicator shows how many days per year were assessed to have air quality that was worse than “moderate” in Champaign County, according to the U.S. Environmental Protection Agency’s (U.S. EPA) Air Quality Index Reports. The period of analysis is 1980-2024, and the U.S. EPA’s air quality ratings analyzed here are as follows, from best to worst: “good,” “moderate,” “unhealthy for sensitive groups,” “unhealthy,” “very unhealthy,” and "hazardous."[1]
In 2024, the number of days rated to have air quality worse than moderate was 0. This is a significant decrease from the 13 days in 2023 in the same category, the highest in the 21st century. That figure is likely due to the air pollution created by the unprecedented Canadian wildfire smoke in Summer 2023.
While there has been no consistent year-to-year trend in the number of days per year rated to have air quality worse than moderate, the number of days in peak years had decreased from 2000 through 2022. Where peak years before 2000 had between one and two dozen days with air quality worse than moderate (e.g., 1983, 18 days; 1988, 23 days; 1994, 17 days; 1999, 24 days), the year with the greatest number of days with air quality worse than moderate from 2000-2022 was 2002, with 10 days. There were several years between 2006 and 2022 that had no days with air quality worse than moderate.
This data is sourced from the U.S. EPA’s Air Quality Index Reports. The reports are released annually, and our period of analysis is 1980-2024. The Air Quality Index Report website does caution that "[a]ir pollution levels measured at a particular monitoring site are not necessarily representative of the air quality for an entire county or urban area," and recommends that data users do not compare air quality between different locations [2].
[1] Environmental Protection Agency. (1980-2024). Air Quality Index Reports. (Accessed 13 June 2025).
[2] Ibid.
Source: Environmental Protection Agency. (1980-2024). Air Quality Index Reports. https://www.epa.gov/outdoor-air-quality-data/air-quality-index-report. (Accessed 13 June 2025).
A Snellen chart is an eye chart that can be used to measure visual acuity. The Snellen chart is printed with eleven lines of block letters. The first line consists of one very large letter, which may be one of several letters, for example E, H, or N. Subsequent rows have increasing numbers of letters that decrease in size. A person taking the test stands 6 metres/20 feet away, covers one eye, and reads aloud the letters of each row, beginning at the top. The smallest row that can be read accurately indicates the visual acuity in that eye. In NTR, the Snellen chart was administered at the MRI scanner.
https://fred.stlouisfed.org/legal/#copyright-public-domain
Graph and download economic data for Crude Birth Rate for the United States (SPDYNCBRTINUSA) from 1960 to 2023 about birth, crude, rate, and USA.
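For users working in R, the series can be retrieved directly from FRED, for example with the quantmod package (a sketch; other FRED clients work equally well):

```r
# download the FRED series named in the entry above and plot it
library(quantmod)
getSymbols("SPDYNCBRTINUSA", src = "FRED")   # creates an xts object of the same name
plot(SPDYNCBRTINUSA, main = "Crude Birth Rate for the United States")
```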
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Displays a line chart where a single or multiple notifiable diseases (up to 6) may be selected within any selected year range from 1924 up to 2016, where data are available. The chart can represent either the number or rate of reported cases. The source data table, limitations of the data and descriptions for the selected notifiable disease(s) are also provided.