MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
A simple tabular time series for school probability and statistics. The goal is to learn how to investigate data: values over time. The measures covered are:
- mean: the sum of all values divided by the number of values, also called the average;
- median: the middle number when the values are sorted;
- mode: the most common number;
- range: the largest number minus the smallest number;
- standard deviation: a measure of how dispersed the data is in relation to the mean.
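As a quick illustration, these measures can be computed with Python's standard statistics module; the sample values below are hypothetical.

```python
import statistics

# Hypothetical value-over-time series (one value per time step).
values = [4, 8, 6, 5, 3, 8, 7]

mean = statistics.mean(values)           # sum of all values / number of values
median = statistics.median(values)       # middle number when the values are sorted
mode = statistics.mode(values)           # most common number
value_range = max(values) - min(values)  # largest number minus smallest number
stdev = statistics.stdev(values)         # dispersion relative to the mean (sample SD)

print(mean, median, mode, value_range, stdev)
```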
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sheet 1 (Raw-Data): The raw data of the study, presenting the tagging results for the measures described in the paper. For each subject, it includes the following columns:
A. a sequential student ID
B. an ID that encodes a random group label and the notation
C. the notation used: User Stories or Use Cases
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade (L/M/H), where H is greater than or equal to 80, M is at least 65 and below 80, and L is anything lower
G. the total number of classes in the student's conceptual model
H. the total number of relationships in the student's conceptual model
I. the total number of classes in the expert's conceptual model
J. the total number of relationships in the expert's conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
P. the researchers' judgement of how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present.
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either
with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is
incorrectly represented in the student model, either (i) via an attribute,
method, or relationship rather than a class, or
(ii) using a generic term (e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical
implementation aspect, e.g., access control. Classes that represent a legacy
system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in
CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in
CM-Expert.
All the calculations and information provided in the following sheets
originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection,
including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes in the student model divided by the number of classes in the expert model is calculated (the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus of this study is on the number of classes; however, we also provide the size ratio for the number of relationships between student and expert model.
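A minimal sketch of this calculation, assuming hypothetical file and column names for the class counts held in columns G and I of Sheet 1:

```python
import pandas as pd
import matplotlib.pyplot as plt

# File and column names are assumptions; columns G and I of Sheet 1
# hold the student and expert class counts, respectively.
df = pd.read_excel("raw_data.xlsx", sheet_name="Raw-Data")
df["size_ratio"] = df["classes_student"] / df["classes_expert"]

# One box plot per group, e.g. grouped by notation (Use Cases vs. User Stories).
df.boxplot(column="size_ratio", by="notation")
plt.show()
```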
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the number of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. It is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the number of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
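The two definitions translate directly into ratios over the counts recorded in columns K-O of Sheet 1; a minimal sketch:

```python
def correctness(al: int, wr: int, so: int, om: int) -> float:
    # AL / (AL + OM + SO + WR), as defined above.
    return al / (al + om + so + wr)

def completeness(al: int, wr: int, om: int) -> float:
    # (AL + WR) / (AL + WR + OM), as defined above.
    return (al + wr) / (al + wr + om)

# Hypothetical subject: 10 aligned, 3 wrongly represented, 2 system-oriented, 5 omitted.
print(correctness(10, 3, 2, 5))  # 0.5
print(completeness(10, 3, 5))    # ~0.722
```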
For sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (T-test) and effect size (Hedges' g) for both completeness and correctness are provided. Hedges' g was calculated with an online tool (https://www.psychometrica.de/effect_size.html); a sketch of the underlying formula follows the sheet list below. The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation - UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case - SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by the exam grades, converted to the categorical values High, Medium, and Low.
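The sheets report Hedges' g from an online tool; for reference, here is a sketch of the standard formula (pooled standard deviation with small-sample correction), using hypothetical group values:

```python
import numpy as np
from scipy import stats

def hedges_g(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    # Pooled standard deviation of the two groups.
    s = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / s      # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample correction factor
    return d * j

# Hypothetical correctness values for two notation groups (UC vs. US).
uc = [0.71, 0.64, 0.80, 0.55]
us = [0.60, 0.52, 0.68, 0.49]
t, p = stats.ttest_ind(uc, us)         # significance (T-test)
print(p, hedges_g(uc, us))
```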
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set was generated in accordance with the semiconductor industry and contains values of summary statistics from sensor recordings of high-precision, high-tech production equipment. Semiconductor production consists of hundreds of process steps performing physical and chemical operations on so-called wafers, i.e., slices of semiconductor material. In the production chain, each piece of process equipment carries several sensors recording physical parameters like gas flow, temperature, voltage, etc., resulting in so-called sensor data. From the sensor data, values of summary statistics are extracted, such as means, standard deviations, and gradients. These values are used to monitor the whole production and to intervene in case of deviations, keeping production as stable as possible.
After production, each device on the wafer is carefully tested, resulting in so-called wafer test data. In some cases, suspicious patterns occur in the wafer test data, potentially leading to failure. In such cases, the root cause must be found in the production chain; the given data is provided for this purpose. The aim is to find correlations between the wafer test data and the values of summary statistics in order to identify the root cause.
The given data is divided into four data sets: "XTrain.csv", "YTrain.csv", "XTest.csv" and "YTest.csv". "XTrain.csv" and "XTest.csv" represent the values of summary statistics originating in the production chain, separated for the purpose of training and validating a statistical model. Included are 114 observations of 77 parameters (values of summary statistics). "YTrain.csv" and "YTest.csv" contain the corresponding wafer test data (144 observations of one parameter).
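A minimal sketch for loading the four files and fitting a baseline model; the file names are from the description above, while the header layout and the choice of a plain linear regression are assumptions (the dataset does not prescribe a model):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# File names as given above; header layout is an assumption.
X_train = pd.read_csv("XTrain.csv")  # summary statistics from the production chain
y_train = pd.read_csv("YTrain.csv")  # corresponding wafer test data
X_test = pd.read_csv("XTest.csv")
y_test = pd.read_csv("YTest.csv")

# Baseline model for relating summary statistics to wafer test data.
model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))   # R^2 on the held-out split
```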
Data licence Germany – Attribution – Version 2.0: https://www.govdata.de/dl-de/by-2-0
License information was derived automatically
Data from various sources are updated in the Statistical Information System of the City of Cologne. The annual statistical yearbook publishes these in tabular, graphic, and cartographic form at the level of the city's boroughs and districts. Furthermore, definitions and calculation bases are explained. Small-scale statistics at the level of the 86 districts can be obtained from the Cologne district information. All levels of the local area structure are explained in this publication.
This statistical data catalogue supplements the range of small-scale data. Selected structural data can be called up here in compact tabular form at the level of the 570 statistical districts or the 86 districts. The two overviews provide information about which data is available and from which source it originates. The data itself is provided annually.
Notes:
The Economic Census is the U.S. Government's official five-year measure of American business and the economy. It is conducted by the U.S. Census Bureau, and response is required by law. In October through December of the census year, forms are sent out to nearly 4 million businesses, including large, medium and small companies representing all U.S. locations and industries. Respondents were asked to provide a range of operational and performance data for their companies. This dataset presents company, establishments, value of shipments, value of product shipments, percentage of product shipments of the total value of shipments, and percentage of distribution of value of product shipments.
By data.world's Admin [source]
This dataset provides insight into the mental health services available to children and young people in England. The data includes all primary and secondary levels of care, as well as breakdowns by age group. Information is provided on the number of people in contact with mental health services; open ward stays; open referrals; referrals starting in the reporting period; attended contacts; indirect activity; discharges from referral; missed care contacts by DNA reasons, and more. With these statistics, analysts may be able to better understand the scope of mental health service usage across different age groups in England and draw valuable conclusions about best practices for helping children and young people receive proper care.
This guide provides information on how to use this dataset effectively.
Understanding the Columns:
Each row represents data from a specific month within a reporting period. The first thing to do is to find out what each column represents; this is explained by the titles and descriptions included at the beginning of this dataset. Note that there are primary-level columns (e.g., Reporting Period, Breakdown) which provide overall context, while secondary-level columns (e.g., CYP01 People in contact with children and young people's mental health services…) provide more detail on specific indicators of interest related to that primary-level column value pair (i.e., Reporting Period X).
Exploring Data Variables:
The next step is to explore which data variables could be helpful when analyzing initiatives or programs related to mental health care for children and youth in England, or when developing policies related to them. Look through all the columns included here for the ones likely to be most helpful, such as ‘CYP21 – Open ward stays...’ or ‘MHS07a - People with an open hospital spell…’, and note down those that are necessary or relevant to your particular situation before analyzing further with software packages like Excel or SPSS.
Analyzing Data Values:
Now comes the time to analyze the individual values provided under each column. Take a single numerical data element, such as ‘CYP02 – People… CPA end RP’, and run through it, looking at trends over time and averages across different sections. You can then look for meaningful correlations between different pieces of information by cross-referencing contexts against each other; any patterns found to be significant can inform decisions about policy implementation and program improvement.
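A minimal pandas sketch of this workflow; the file name and the exact column labels are assumptions based on the indicator names quoted in this guide:

```python
import pandas as pd

# File name and column labels are assumptions based on the guide above.
df = pd.read_csv("cyp_mental_health.csv")

# Trend over time for a single indicator within one breakdown.
monthly = (df[df["Breakdown"] == "Age"]
           .groupby("Reporting Period")["CYP01"]
           .sum()
           .sort_index())
print(monthly)         # values over time
print(monthly.mean())  # average across reporting periods
```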
- Using this dataset to identify key trends in mental health services usage among children and young people in England, such as the number of open ward stays and referrals received.
- Using the information to develop targeted solutions for areas of need identified from the data, by geographical area or age group, e.g., creating campaigns or programs specifically targeting groups at a higher risk of experiencing mental health difficulties or engaging with specialist services.
- Tracking how well these initiatives are working over time by monitoring relevant metrics, such as attendance at appointments and open referrals, to evaluate their effectiveness in improving access to and engagement with mental health services for those most in need.
If you use this dataset in your research, please credit the original authors. Data Source
License: Dataset copyright by authors - You are free to: - Share - copy and redistribute the material in any medium or format for any purpose, even commercially. - Adapt - remix, transform, and build upon the material for any purpose, even commercially. - You must: - Give appropriate credit - Provide a link to the license, and indicate if changes were made. - ShareAlike - You must distribute your contributions under the same license as the original. - ...
The global big data market is forecast to grow to 103 billion U.S. dollars by 2027, more than double its expected market size in 2018. With a share of 45 percent, the software segment would become the largest big data market segment by 2027.
What is big data?
Big data is a term that refers to the kind of data sets that are too large or too complex for traditional data processing applications. It is defined as having one or some of the following characteristics: high volume, high velocity, or high variety. Fast-growing mobile data traffic, cloud computing traffic, as well as the rapid development of technologies such as artificial intelligence (AI) and the Internet of Things (IoT) all contribute to the increasing volume and complexity of data sets.
Big data analytics
Advanced analytics tools, such as predictive analytics and data mining, help to extract value from the data and generate new business insights. The global big data and business analytics market was valued at 169 billion U.S. dollars in 2018 and is expected to grow to 274 billion U.S. dollars in 2022. As of November 2018, 45 percent of professionals in the market research industry reportedly used big data analytics as a research method.
This data set contains annual quantities and value for all seafood products that are landed and sold by established seafood dealers and brokers in the Southeast Region (North Carolina through Texas). These types of data, referred to as the general canvass landings statistics, have been collected by the NOAA Fisheries Service, National Marine Fisheries Service and its predecessor agency, the Bureau of Commercial Fisheries. The data have been available in computerized form since the early 1960s. The quantities and values reported in this data set comprise the annual landings beginning in 1962. Beginning in 1976, the data were collected monthly; see the Links section for the reference to the monthly general canvass landings. The annual general canvass landings include quantities and value for all living marine species, identified by species (usually the local or common name). These data were collected by field agents employed by the National Marine Fisheries Service or the Bureau of Commercial Fisheries and assigned to local fishing ports. The agents contacted the majority of the seafood dealers or brokers in their assigned areas and recorded the quantities and value for each species or species category from the sales receipts maintained by the seafood dealers. In addition, information on the gear and area of capture is available for most of the landings statistics in the data set. Based on their knowledge of the fishing activity in the area, the agents would estimate the type of fishing gear and the area where the fishing was likely to have occurred. More detailed information on the caveats associated with these data is provided in the Characteristics, Caveats and Issues section. However, because these data are summaries, they do not contain information on the quantities of fishing effort or identification of the fishermen or vessels that caught the fish or shellfish.
The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly. While it was estimated at ***** zettabytes in 2025, the forecast for 2029 stands at ***** zettabytes. Thus, global data generation will triple between 2025 and 2029. Data creation has been expanding continuously over the past decade. In 2020, the growth was higher than previously expected, caused by the increased demand due to the coronavirus (COVID-19) pandemic, as more people worked and learned from home and used home entertainment options more often.
This report describes the quality assurance arrangements for the registered provider (RP) Tenant Satisfaction Measures statistics, providing more detail on the regulatory and operational context for data collections which feed these statistics and the safeguards that aim to maximise data quality.
The statistics we publish are based on data collected directly from local authority registered providers (LARPs) and from private registered providers (PRPs) through the Tenant Satisfaction Measures (TSM) return. We use the data collected through these returns extensively as a source of administrative data. The United Kingdom Statistics Authority (UKSA) encourages public bodies to use administrative data for statistical purposes and, as such, we publish these data.
These data are first being published in 2024, following the first collection and publication of the TSM.
In February 2018, the UKSA published the Code of Practice for Statistics. This sets standards for organisations producing and publishing statistics, ensuring quality, trustworthiness and value.
These statistics are drawn from our TSM data collection and are being published for the first time in 2024 as official statistics in development.
Official statistics in development are official statistics that are undergoing development. Over the next year we will review these statistics and consider areas for improvement to guidance, validations, data processing and analysis. We will also seek user feedback with a view to improving these statistics to meet user needs and to explore issues of data quality and consistency.
Until September 2023, ‘official statistics in development’ were called ‘experimental statistics’. Further information can be found on the Office for Statistics Regulation website: https://www.ons.gov.uk/methodology/methodologytopicsandstatisticalconcepts/guidetoofficialstatisticsindevelopment
We are keen to increase the understanding of the data, including their accuracy, reliability, and value to users. Please complete the form (https://forms.office.com/e/cetNnYkHfL) or email feedback, including suggestions for improvements or queries about the source data or processing, to enquiries@rsh.gov.uk.
We intend to publish these statistics in Autumn each year, with the data pre-announced in the release calendar.
All data and additional information (including a list of individuals (if any) with 24 hour pre-release access) are published on our statistics pages.
The data used in the production of these statistics are classed as administrative data. In 2015 the UKSA published a regulatory standard for the quality assurance of administrative data. As part of our compliance with the Code of Practice, and in the context of other statistics published by the UK Government and its agencies, we have determined that the statistics drawn from the TSMs are likely to be categorised as low quality risk – medium public interest (with a requirement for basic/enhanced assurance).
The publication of these statistics can be considered as medium public interest.
The global big data and business analytics (BDA) market was valued at ***** billion U.S. dollars in 2018 and is forecast to grow to ***** billion U.S. dollars by 2021. In 2021, more than half of BDA spending will go towards services. IT services is projected to make up around ** billion U.S. dollars, and business services will account for the remainder.
Big data
High volume, high velocity and high variety: one or more of these characteristics is used to define big data, the kind of data sets that are too large or too complex for traditional data processing applications. Fast-growing mobile data traffic, cloud computing traffic, as well as the rapid development of technologies such as artificial intelligence (AI) and the Internet of Things (IoT) all contribute to the increasing volume and complexity of data sets. For example, connected IoT devices are projected to generate **** ZBs of data in 2025.
Business analytics
Advanced analytics tools, such as predictive analytics and data mining, help to extract value from the data and generate business insights. The size of the business intelligence and analytics software application market is forecast to reach around **** billion U.S. dollars in 2022. Growth in this market is driven by a focus on digital transformation, a demand for data visualization dashboards, and an increased adoption of cloud.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Historical dataset showing U.S. hunger statistics by year from 2001 to 2022.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Each sheet contains data and statistics generated for the research. Refer to the table legends below.
Supplementary Table S1. Core microbiome in gut.
Supplementary Table S2. Core microbiome in lung.
Supplementary Table S3. Adjacency matrix of gut and lung microbiome in healthy controls.
Supplementary Table S4. Adjacency matrix of gut and lung microbiome in patients with TB before treatment.
Supplementary Table S5. Adjacency matrix of gut and lung microbiome in patients with TB after treatment.
Supplementary Table S6. Node score of gut and lung microbiome in healthy controls.
Supplementary Table S7. Node score of gut and lung microbiome in patients with TB before treatment.
Supplementary Table S8. Node score of gut and lung microbiome in patients with TB after treatment.
Supplementary Table S9. Significant results from differential abundance analysis of gut microbiota between HCs and patients with TB before treatment in the gut. Log fold change (logFC) represents the relative abundance in patients with TB compared to healthy controls. The logFC, p-values, and false discovery rates (FDR) were obtained from ANCOM-BC, and linear discriminant analysis (LDA) scores were calculated using LEfSe.
Supplementary Table S10. Significant results from differential abundance analysis of gut microbiota between HCs and patients with TB before treatment in the lung. The logFC represents the log-transformed expression values of patients with TB relative to those of healthy controls (HC). The logFC, p-values, and FDR were obtained from ANCOM-BC, while the LDA scores were derived from LEfSe.
Supplementary Table S11. Significant results from differential abundance analysis of gut microbiota between patients with TB before and after treatment in the gut. The logFC represents the log-transformed expression values of patients with TB after treatment relative to those of patients with TB before treatment. The logFC, p-values, and FDR were obtained from ANCOM-BC, while the LDA scores were derived from LEfSe.
Supplementary Table S12. Significant results from differential abundance analysis of gut microbiota between patients with TB before and after treatment in the lung. The logFC represents the log-transformed expression values of patients with TB after treatment relative to those of patients with TB before treatment. The logFC, p-values, and FDR were obtained from ANCOM-BC, while the LDA scores were derived from LEfSe.
Supplementary Table S13. Taxa clustering results in healthy controls.
Supplementary Table S14. Taxa clustering results in patients with TB before treatment.
Supplementary Table S15. Taxa clustering results in patients with TB after treatment.
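As an illustration, an adjacency matrix sheet such as Supplementary Table S3 can be loaded and turned into a co-occurrence network; the workbook file name is an assumption, and this assumes the sheet is a square matrix with matching row and column labels:

```python
import pandas as pd
import networkx as nx

# Workbook file name is an assumption; sheet names follow the legend above.
adj = pd.read_excel("supplementary_tables.xlsx",
                    sheet_name="Supplementary Table S3", index_col=0)

# Build the gut-lung microbiome co-occurrence network from the adjacency matrix.
g = nx.from_pandas_adjacency(adj)
print(g.number_of_nodes(), g.number_of_edges())
```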
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data has been collected through a survey of enterprises with at least 50 persons employed.
The data category covers a group of variables which provide relevant statistical evidence and information about the factors driving international sourcing, e.g., the impact on competitiveness, motivations, and perceived barriers, together with possible employment consequences in the Member State.
There have been four collection rounds:
The data focuses on the relocation of core and support business functions of enterprises in the business economy sector, from domestic to abroad and vice versa, as a result of decisions taken by the domestic enterprises.
In summary, the collected indicators are:
The dimensions used to describe international sourcing in the 2021 collection round are:
In the 2021 collection round, two new dimensions were included:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset and Octave/MatLab code/scripts for data analysis.
Background: Methods for p-value correction are criticized for either increasing Type II error or improperly reducing Type I error. This problem is worse when dealing with thousands or even hundreds of paired comparisons between waves or images which are performed point-to-point. This text considers patterns in probability vectors resulting from multiple point-to-point comparisons between two event-related potential (ERP) waves (mass univariate analysis) to correct p-values, where clusters of significant p-values may indicate true H0 rejection.
New method: We used ERP data from normal subjects and subjects with attention deficit hyperactivity disorder (ADHD) under a cued forced two-choice test to study attention. The decimal logarithm of the p-vector (p') was convolved with a Gaussian window whose length was set as the shortest lag above which autocorrelation of each ERP wave may be assumed to have vanished. To verify the reliability of the present correction method, we ran Monte-Carlo (MC) simulations to (1) evaluate confidence intervals of rejected and non-rejected areas of our data, (2) evaluate differences between corrected and uncorrected p-vectors, or simulated ones, in terms of the distribution of significant p-values, and (3) empirically verify the rate of Type I error (comparing 10,000 pairs of mixed samples with control and ADHD subjects).
Results: The present method reduced the range of p'-values that did not show covariance with neighbors (Type I and also Type II errors). The differences between the simulated or raw p-vector and the corrected p-vectors were, respectively, minimal and maximal when the window length was set by the autocorrelation in the p-vector convolution.
Comparison with existing methods: Our method was less conservative, while FDR methods rejected basically all significant p-values for the Pz and O2 channels. The MC simulations, the gold-standard method for error correction, presented 2.78±4.83% difference (all 20 channels) from the p-vector after correction, while the difference between the raw and corrected p-vector was 5.96±5.00% (p = 0.0003).
Conclusion: As a cluster-based correction, the present new method, which adopts adaptive parameters to set the correction, seems biologically and statistically suitable for correcting p-values in mass univariate analysis of ERP waves.
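The dataset ships its own Octave/MatLab scripts; purely as an illustration of the core step described above, here is a hedged NumPy sketch of convolving the decimal logarithm of a p-vector with a normalized Gaussian window. The window's standard deviation is a free choice not specified in the abstract:

```python
import numpy as np
from scipy.signal.windows import gaussian

def smooth_log_p(p_vector, window_len):
    """Convolve log10(p) with a normalized Gaussian window.

    window_len should be the shortest lag above which the autocorrelation
    of the ERP wave can be assumed to have vanished (see text); the window
    std below is an assumption, not taken from the original scripts.
    """
    log_p = np.log10(np.asarray(p_vector, dtype=float))
    win = gaussian(window_len, std=window_len / 6)
    win /= win.sum()
    return np.convolve(log_p, win, mode="same")

# Clusters of small smoothed log10(p) values suggest true H0 rejection.
p_corrected = 10 ** smooth_log_p([0.04, 0.01, 0.03, 0.20, 0.50], window_len=3)
print(p_corrected)
```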
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Historical dataset showing Taiwan hunger statistics by year from N/A to N/A.
We wish to answer this question: If you observe a ‘significant’ p-value after doing a single unbiased experiment, what is the probability that your result is a false positive? The weak evidence provided by p-values between 0.01 and 0.05 is explored by exact calculations of false positive risks. When you observe p = 0.05, the odds in favour of there being a real effect (given by the likelihood ratio) are about 3 : 1. This is far weaker evidence than the odds of 19 to 1 that might, wrongly, be inferred from the p-value. And if you want to limit the false positive risk to 5%, you would have to assume that you were 87% sure that there was a real effect before the experiment was done. If you observe p = 0.001 in a well-powered experiment, it gives a likelihood ratio of almost 100 : 1 odds on there being a real effect. That would usually be regarded as conclusive. But the false positive risk would still be 8% if the prior probability of a real effect were only 0.1. And, in this case, if you w...
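The arithmetic behind these figures follows from Bayes' odds: the posterior odds of a real effect are the likelihood ratio times the prior odds, and the false positive risk is the complementary posterior probability. A short check of the 8% figure quoted above:

```python
def false_positive_risk(likelihood_ratio: float, prior: float) -> float:
    # Posterior odds of a real effect = likelihood ratio * prior odds;
    # the false positive risk is the complementary posterior probability.
    posterior_odds = likelihood_ratio * prior / (1 - prior)
    return 1 / (1 + posterior_odds)

# p = 0.001 in a well-powered experiment gives a likelihood ratio of ~100:1.
# With a prior probability of a real effect of 0.1, the risk is ~8%.
print(false_positive_risk(100, 0.1))  # ~0.083
```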
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Short-term business statistics (STS) give information on a wide range of economic activities. All STS data are index data; additionally, annual absolute values are released for building permits indicators. Percentage changes are also available for each indicator. Infra-annual percentage changes (changes between two consecutive months or quarters) are calculated on the basis of non-adjusted data (prices) or calendar and seasonally adjusted data (volume and value indicators); year-on-year changes (comparing a period to the same period one year ago) are calculated on the basis of non-adjusted data (prices and employment) or calendar adjusted data (volume and value indicators).
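The two kinds of percentage change reduce to simple index arithmetic; a minimal sketch with hypothetical monthly index values:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly index values (base period = 100).
months = pd.period_range("2020-01", periods=24, freq="M")
idx = pd.Series(100 + np.cumsum(np.random.default_rng(0).normal(0.2, 0.5, 24)),
                index=months)

infra_annual = idx.pct_change() * 100    # change vs. the previous month
year_on_year = idx.pct_change(12) * 100  # change vs. the same month one year ago
print(infra_annual.tail(3))
print(year_on_year.tail(3))
```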
The index data are generally presented in the following forms:
Depending on the EBS Regulation, data are accessible as monthly, quarterly, and annual data.
The STS indicators are listed below in five different sectors, reflecting the dissemination of these data in Eurostat’s online database “Eurobase”.
Based on the national data, Eurostat compiles short-term indicators for the EU and euro area. Among these, a list of indicators, called Principal European Economic Indicators (PEEIs) has been identified by key users as being of primary importance for the conduct of monetary and economic policy of the euro area. The PEEIs contributed by STS are marked with * in the text below.
The euro indicators are released through Eurostat's website.
INDUSTRY
CONSTRUCTION
TRADE
SERVICES
MARKET ECONOMY
National reference metadata of the reporting countries are available in the Annexes to this metadata file.
Open Data Licence: https://data.gov.sg/open-data-licence
Dataset from Singapore Department of Statistics. For more information, visit https://data.gov.sg/datasets/d_5300314b2a47151f85b1786e449e860c/view