Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study, the year 2010 or 2011 is the year the data were collected, SH indicates data for Science Hub, and the date is the date of file preparation). The data in a file are described in a read me worksheet, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read me description of the columns in the data set for chemical analysis; in this file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY. Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813, (2017).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article describes a free, open-source collection of templates for the popular Excel spreadsheet program (2013 and later versions). These templates are spreadsheet files that allow easy and intuitive learning and the implementation of practical examples concerning descriptive statistics, random variables, confidence intervals, and hypothesis testing. Although they are designed to be used with Excel, they can also be employed with other free spreadsheet programs (changing some particular formulas). Moreover, we exploit some possibilities of the ActiveX controls of the Excel Developer Menu to produce interactive Gaussian density charts. Finally, it is important to note that they can often be embedded in a web page, so it is not necessary to use the Excel software itself. These templates have been designed as a useful tool for teaching basic statistics and for carrying out data analysis even when the students are not familiar with Excel. Additionally, they can be used as a complement to other analytical software packages. They aim to assist students in learning statistics within an intuitive working environment. Supplementary materials with the Excel templates are available online.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Sample data for exercises in Further Adventures in Data Cleaning.
Operational Analysis is a method of examining the current and historical performance of the operations and maintenance investments and measuring that performance against an established set of cost, schedule, and performance parameters. The Operational Analysis template is used as a guide in preparing and documenting SSA's Operational Analyses.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the attached Excel file, "Example Student Data", there are 6 sheets: three sheets with sample datasets, one for each of the three exercise protocols described, and three sheets with sample graphs created using one of the three datasets.
Sheets 1 and 2: An example of a dataset and graph created from an exercise protocol designed to stress the creatine phosphate system. Here, the subject was a track and field athlete who threw the shot put for the DeSales University track team. The NIRS monitor was placed on the right triceps muscle, and the student threw the shot put six times with a minute of rest between throws. Data were collected telemetrically by the NIRS device and then downloaded after the student had completed the protocol.
Sheets 3 and 4: An example of a dataset and graph created from an exercise protocol designed to stress the glycolytic energy system. In this example, the subject performed continuous squat jumps for 30 seconds, followed by a 90-second rest period, for a total of three exercise bouts. The NIRS monitor was placed on the left gastrocnemius muscle. Here again, data were collected telemetrically by the NIRS device and then downloaded after the subject had completed the protocol.
Sheets 5 and 6: In this example, the dataset and graph are from an exercise protocol designed to stress the oxidative system. Here, the student held a light-intensity, isometric biceps contraction (pushing against a table). The NIRS monitor was attached to the left biceps muscle belly. In this case, data were collected by a student observing the SmO2 values displayed on a secondary device, specifically a smartphone running the IPSensorMan app. The recording student observed and recorded the data in an Excel spreadsheet and marked the times that exercise began and ended on the spreadsheet.
This repository contains the data supporting the manuscript "A Generic Scenario Analysis of End-of-Life Plastic Management: Chemical Additives" (to be) submitted to the Energy and Environmental Science journal (https://pubs.rsc.org/en/journals/journalissues/ee#!recentarticles&adv). The repository contains Excel spreadsheets used to calculate material flow throughout the plastics life cycle, with a strong emphasis on chemical additives in the end-of-life stages. Three major scenarios were presented in the manuscript: 1) mechanical recycling (the existing recycling infrastructure), 2) adding chemical recycling to the existing plastics recycling, and 3) extracting chemical additives before the manufacturing stage.
Users would primarily modify values on the yellow tab "US 2018 Facts - Sensitivity". Values highlighted in yellow may be changed for sensitivity analysis purposes. Please note that the values shown for MSW generated, recycled, incinerated, landfilled, composted, imported, exported, re-exported, and other categories in this tab were based on 2018 data. Analysis for other years is possible with a replicate version of this spreadsheet and the necessary data to replace those of 2018.
Most of the tabs, especially those that contain "Stream # - Description", do not require user interaction. They are intermediate calculations that change according to the user inputs; they are visible to the user so that the calculation method is transparent. The major results of these individual stream tabs are ultimately compiled into one summary tab. All streams throughout the plastics life cycle, for each respective scenario (1, 2, and 3), are shown in the "US Mat Flow Analysis 2018" tab. For each stream, we accounted for the approximate mass of plastics found in MSW, additives that may be present, and non-plastics. Each spreadsheet contains a representative diagram that matches the stream labels; this illustration is included to help the user understand the connection between each stage in the plastics life cycle. For example, the Scenario 1 spreadsheet uniquely contains a Material Flow Analysis Summary, in addition to the LCI. In the "Material Flow Analysis Summary" tab, we represented the input, output, releases, exposures, and greenhouse gas emissions based on the amount of material entering a specific stage in the plastics life cycle. The "Life Cycle Inventory" tab contributes additional calculations to estimate land, air, and water releases.
Figures and Data - A gs analysis on eol plastic management: this Word document contains the raw data used to create all the figures in the main manuscript. The major references used to obtain the data are also included where appropriate.
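To make the stream bookkeeping concrete, here is a minimal, generic mass-balance sketch in Python with entirely hypothetical tonnages, fractions, and pathway names; it is not the repository's spreadsheet logic, only an illustration of splitting a plastic waste stream across end-of-life pathways while carrying the additive mass along.

```python
# Generic illustration (not the repository's spreadsheet): split a plastic
# waste stream across end-of-life pathways and track the additive mass carried
# with it. All tonnages and fractions are hypothetical placeholders.
plastic_waste_tonnes = 35_000_000        # hypothetical MSW plastics generated
additive_fraction = 0.06                 # hypothetical mass fraction of additives

eol_split = {"recycled": 0.09, "incinerated": 0.16, "landfilled": 0.75}

streams = {}
for pathway, share in eol_split.items():
    total = plastic_waste_tonnes * share
    streams[pathway] = {
        "plastics_tonnes": total * (1 - additive_fraction),
        "additives_tonnes": total * additive_fraction,
    }

for pathway, masses in streams.items():
    print(pathway, {k: round(v) for k, v in masses.items()})
```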
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spreadsheets targeted at the analysis of GHS safety fingerprints.
Abstract
Over a 20-year period, the UN developed the Globally Harmonized System (GHS) to address international variation in chemical safety information standards. By 2014, the GHS had become widely accepted internationally and is now the cornerstone of OSHA's Hazard Communication Standard. Despite this progress, today we observe inconsistent results when different sources apply the GHS to specific chemicals, in terms of the GHS pictograms, hazard statements, precautionary statements, and signal words assigned to those chemicals. In order to assess the magnitude of this problem, this research extends the "chemical fingerprints" used in 2D chemical structure similarity analysis to GHS classifications. By generating a chemical safety fingerprint, the consistency of the GHS information for specific chemicals can be assessed. The problem is that the sources for GHS information can differ. For example, the SDS for sodium hydroxide pellets found on Fisher Scientific's website displays two pictograms, while the GHS information for sodium hydroxide pellets on Sigma Aldrich's website has only one pictogram. A chemical information tool that identifies such discrepancies within a specific chemical inventory can assist in maintaining the quality of the safety information needed to support safe work in the laboratory. The tools for this analysis will be scaled to the size of a moderately large research lab or a small chemistry department as a whole (between 1000 and 3000 chemical entities) so that labelling expectations within these universes can be established as consistently as possible.
Most chemists are familiar with spreadsheet programs such as Excel and Google Sheets, which many chemists use daily. Through a monadal programming approach with these tools, the analysis of GHS information can be made possible for non-programmers. This monadal approach employs single spreadsheet functions to analyze the data collected, rather than long programs, which can be difficult to debug and maintain. Another advantage of this approach is that the single monadal functions can be mixed and matched to meet new goals as information needs about the chemical inventory evolve over time. These monadal functions are used to convert GHS information into binary strings of data called "bitstrings"; this approach is also used when comparing chemical structures. The binary approach makes data analysis more manageable, as GHS information comes in a variety of formats, such as pictures or alphanumeric strings, which are difficult to compare on their face. Bitstrings generated from the GHS information can be compared using an operator such as the Tanimoto coefficient to yield values from 0, for strings that have no similarity, to 1, for strings that are identical. Once a particular set of information is analyzed, the hope is that the same techniques can be extended to more information. For example, if GHS hazard statements are analyzed through a spreadsheet approach, the same techniques, with minor modifications, could be used to tackle more GHS information such as pictograms.
Intellectual Merit. This research indicates that the cheminformatic technique of structural fingerprints can be used to create safety fingerprints. Structural fingerprints are binary bit strings obtained from the non-numeric entity of 2D structure. This structural fingerprint allows comparison of 2D structures through the Tanimoto coefficient. The use of structural fingerprints can be extended to safety fingerprints, which can be created by converting a non-numeric entity such as GHS information into a binary bit string and comparing the data using the Tanimoto coefficient.
Broader Impact. Extensions of this research can be applied to many aspects of GHS information. This research focused on comparing GHS hazard statements, but could be further applied to other pieces of GHS information such as pictograms and GHS precautionary statements. Another facet of this research is allowing the chemist who uses the data to compare large datasets using spreadsheet programs such as Excel, without needing a substantial programming background. Development of this technique will also benefit the Chemical Health and Safety and Chemical Information communities by better defining the quality of GHS information available and providing a scalable and transferable tool to manipulate this information to meet a variety of other organizational needs.
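As an illustration of the bitstring-and-Tanimoto idea described above, here is a minimal Python sketch; the vocabulary of hazard codes, the helper names, and the example records are hypothetical and are not the spreadsheet functions used in the study.

```python
# Minimal sketch (not the study's spreadsheet tool): encode GHS hazard
# statements as bitstrings and compare them with the Tanimoto coefficient.
# The vocabulary and example H-statements below are illustrative only.

VOCAB = ["H290", "H302", "H314", "H315", "H318", "H319", "H335"]  # reference order

def to_bitstring(h_statements, vocab=VOCAB):
    """Return a 0/1 list marking which vocabulary entries are present."""
    present = set(h_statements)
    return [1 if code in present else 0 for code in vocab]

def tanimoto(a, b):
    """Tanimoto coefficient for two equal-length binary lists."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 1.0  # two empty strings count as identical

# Two hypothetical SDS records for the same chemical from different suppliers
source_a = to_bitstring(["H290", "H314", "H318"])
source_b = to_bitstring(["H314", "H318", "H319"])
print(tanimoto(source_a, source_b))  # 0.5 -> the records only partly agree
```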
No description was included in this Dataset collected from the OSF
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a dataset analysis relating to our previous research and the current research. It is the result of our observations over 3 years of monitoring and is summarized briefly in our first publication: https://doi.org/10.5281/zenodo.10407923.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data used in the analysis of determinants of dividend policy - a case of the banking sector in Serbia.
Introduction: Equations can be used to calculate pulse wave velocity (ePWV) from blood pressure (BP) values and age. The ePWV predicts cardiovascular events beyond carotid-femoral PWV. We aimed to evaluate the correlation between four different equations for calculating ePWV. Methods: The ePWV was estimated using mean BP (MBP) from office BP (MBPOBP) or 24-hour ambulatory BP (MBP24-hBP). We separated the whole sample into two groups: individuals with risk factors and healthy individuals. The ePWV was calculated as follows: We calculated the concordance correlation coefficient (Pc) between e1-PWVOBP vs e2-PWVOBP, e1-PWV24-hBP vs e2-PWV24-hBP, and the mean values of e1-PWVOBP, e2-PWVOBP, e1-PWV24-hBP, and e2-PWV24-hBP. A multilevel regression model determined how much the ePWVs are influenced by age and MBP values. Results: We analyzed data from 1541 individuals; 1374 with risk factors and 167 healthy. The values are presented for the entire sample, for risk-factor patients and for he...
This study is a secondary analysis of data obtained from two cross-sectional studies conducted at a specialized center in Brazil to diagnose and treat non-communicable diseases. In both studies, the inclusion criteria were adults aged 18 years and above, referred to undergo ambulatory blood pressure monitoring (ABPM) due to suspected non-treated or uncontrolled hypertension following initial blood pressure measurements by a physician. The combined databases included 1541 people. For the first database, we recruited participants between 28 January and 13 December 2013, and for the second database, between 23 January 2016 and 28 June 2019. Prior to being fitted with an ABPM device and assisted by a trained nurse, all participants signed a written consent form to partake in the research. Later, the nurse collected demographic and clinical data, including any previous reports of clinical cardiovascular disease (CVD), acute myocardial infarction, acute coronary syndrome, coronary or other a...
# ePWV_PLOS_ONE
The database includes variables from two other databases; we collected only the variables of interest for the manuscript from them. The ePWV_PLOS_ONE database presents all the data described in the paper. We used Microsoft Excel (version 2013) to enter the data. The spreadsheet has 36 columns (A to AI) and 1542 rows (data in rows 2 to 1542). The ePWV_PLOS_ONE file contains two worksheets, DATABASE and LEGENDS. DATABASE presents all data from the 1541 subjects; the LEGENDS worksheet describes the meaning of the variable abbreviations.
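For readers reproducing the agreement analysis, the concordance correlation coefficient mentioned above can be computed from paired estimates as in the minimal sketch below (Lin's formula, with made-up ePWV values; this is not the authors' code).

```python
# Minimal sketch of Lin's concordance correlation coefficient (Pc),
# Pc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
# applied to made-up paired ePWV estimates (m/s); not the study's data or code.
import numpy as np

def concordance_cc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

e1_pwv = [7.2, 8.1, 9.4, 10.3, 11.0]   # hypothetical equation-1 estimates
e2_pwv = [7.0, 8.3, 9.2, 10.6, 11.4]   # hypothetical equation-2 estimates
print(round(concordance_cc(e1_pwv, e2_pwv), 3))
```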
Data was derived from the following sources:
n/a
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘Coffee shop sample data (11.1.3+)’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/ylchang/coffee-shop-sample-data-1113 on 28 January 2022.
--- Dataset description provided by original source is as follows ---
This sample data module contains representative retail data from a fictional coffee chain. The source data is contained in an uploaded file named April Sales.zip. Source: IBM.
We have created sample data for a fictional coffee shop chain with three locations in New York City. The chain has purchased IBM Cognos Analytics to identify factors that contribute to its success and, ultimately, to make data-informed decisions.
Amber and Sandeep are the co-founders of the coffee chain. They uploaded their data in a series of spreadsheets and created a data module. From that data, they designed an operations dashboard and a marketing dashboard.
Inventory
Amber and Sandeep have created two dashboards and one data module that is based on nine spreadsheets:
Data
The sample data module named Coffee sales and marketing can be found in Team content > Samples > Data. There are nine tables:
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This zip file contains data files for 3 activities described in the accompanying PPT slides:
1. An Excel spreadsheet for analysing gain scores in a 2-group, 2-time data array. This activity requires access to https://campbellcollaboration.org/research-resources/effect-size-calculator.html to calculate effect size (an illustrative gain-score sketch follows this list).
2. An AMOS path model and SPSS data set for an autoregressive, bivariate path model with cross-lagging. This activity is related to the following article: Brown, G. T. L., & Marshall, J. C. (2012). The impact of training students how to write introductions for academic essays: An exploratory, longitudinal study. Assessment & Evaluation in Higher Education, 37(6), 653-670. doi:10.1080/02602938.2011.563277
3. An AMOS latent curve model and SPSS data set for a 3-time latent factor model with an interaction mixed model that uses GPA as a predictor of the LCM start and slope or change factors. This activity makes use of data reported previously and a published data analysis case: Peterson, E. R., Brown, G. T. L., & Jun, M. C. (2015). Achievement emotions in higher education: A diary study exploring emotions across an assessment event. Contemporary Educational Psychology, 42, 82-96. doi:10.1016/j.cedpsych.2015.05.002 and Brown, G. T. L., & Peterson, E. R. (2018). Evaluating repeated diary study responses: Latent curve modeling. In SAGE Research Methods Cases Part 2. Retrieved from http://methods.sagepub.com/case/evaluating-repeated-diary-study-responses-latent-curve-modeling doi:10.4135/9781526431592
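For activity 1, the gain-score logic can be previewed outside Excel; the sketch below uses made-up scores and reports a pooled-SD Cohen's d, which is one common choice and not necessarily the method used by the Campbell Collaboration calculator.

```python
# Minimal sketch of the 2-group, 2-time gain-score idea (made-up scores;
# the actual activity uses the Excel sheet plus the Campbell Collaboration
# effect-size calculator). Cohen's d on gain scores with a pooled SD is
# shown as one common choice, not necessarily the calculator's method.
import statistics as st

treat_pre,   treat_post   = [52, 60, 47, 55, 63], [61, 72, 50, 64, 70]
control_pre, control_post = [50, 58, 49, 54, 60], [53, 59, 50, 57, 62]

treat_gain   = [post - pre for pre, post in zip(treat_pre,  treat_post)]
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]

pooled_sd = (((len(treat_gain) - 1) * st.stdev(treat_gain) ** 2 +
              (len(control_gain) - 1) * st.stdev(control_gain) ** 2) /
             (len(treat_gain) + len(control_gain) - 2)) ** 0.5
cohens_d = (st.mean(treat_gain) - st.mean(control_gain)) / pooled_sd
print(round(cohens_d, 2))
```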
ODC Public Domain Dedication and Licence (PDDL) v1.0 http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
Dataset Overview: This dataset pertains to the examination results of students who participated in a series of academic assessments at a fictitious educational institution named "University of Exampleville." The assessments were administered across various courses and academic levels, with a focus on evaluating students' performance in general management and domain-specific topics.
Columns: The dataset comprises 12 columns, each representing specific attributes and performance indicators of the students. These columns encompass information such as the students' names (which have been anonymized), their respective universities, academic program names (including BBA and MBA), specializations, the semester of the assessment, the type of examination domain (general management or domain-specific), general management scores (out of 50), domain-specific scores (out of 50), total scores (out of 100), student ranks, and percentiles.
Data Collection: The examination data was collected during a standardized assessment process conducted by the University of Exampleville. The exams were designed to assess students' knowledge and skills in general management and their chosen domain-specific subjects. It involved students from both BBA and MBA programs who were in their final year of study.
Data Format: The dataset is available in a structured format, typically as a CSV file. Each row represents a unique student's performance in the examination, while columns contain specific information about their results and academic details.
Data Usage: This dataset is valuable for analyzing and gaining insights into the academic performance of students pursuing BBA and MBA degrees. It can be used for various purposes, including statistical analysis, performance trend identification, program assessment, and comparison of scores across domains and specializations. Furthermore, it can be employed in predictive modeling or decision-making related to curriculum development and student support.
Data Quality: The dataset has undergone preprocessing and anonymization to protect the privacy of individual students. Nevertheless, it is essential to use the data responsibly and in compliance with relevant data protection regulations when conducting any analysis or research.
Data Format: The exam data is provided in a structured format, commonly as a CSV (Comma-Separated Values) file. Each row in the dataset represents a unique student's examination performance, and each column contains specific attributes and scores related to the examination. The CSV format allows for easy import and analysis using various data analysis tools and programming languages such as Python or R, or spreadsheet software such as Microsoft Excel; a minimal loading sketch in Python follows the column descriptions below.
Here's a column-wise description of the dataset:
Name OF THE STUDENT: The full name of the student who took the exam. (Anonymized)
UNIVERSITY: The university where the student is enrolled.
PROGRAM NAME: The name of the academic program in which the student is enrolled (BBA or MBA).
Specialization: If applicable, the specific area of specialization or major that the student has chosen within their program.
Semester: The semester or academic term in which the student took the exam.
Domain: The type of examination domain: general management or domain-specific.
GENERAL MANAGEMENT SCORE (OUT of 50): The score obtained by the student in the general management part of the exam, out of a maximum possible score of 50.
Domain-Specific Score (Out of 50): The score obtained by the student in the domain-specific part of the exam, also out of a maximum possible score of 50.
TOTAL SCORE (OUT of 100): The total score obtained by adding the scores from the general management and domain-specific parts, out of a maximum possible score of 100.
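Because the file is a plain CSV, it can be loaded directly for analysis; the sketch below assumes pandas and a hypothetical file name, with column labels following the descriptions above, although the exact headers in the distributed file may differ.

```python
# Minimal loading sketch (hypothetical file name; column headers assumed to
# match the descriptions above, but the real CSV may differ).
import pandas as pd

df = pd.read_csv("exam_results.csv")  # hypothetical path

# Recompute the total and derive rank/percentile to cross-check the provided columns.
df["computed_total"] = (df["GENERAL MANAGEMENT SCORE (OUT of 50)"]
                        + df["Domain-Specific Score (Out of 50)"])
df["computed_rank"] = df["computed_total"].rank(ascending=False, method="min")
df["computed_percentile"] = df["computed_total"].rank(pct=True) * 100

print(df.groupby(["PROGRAM NAME", "Specialization"])["computed_total"].describe())
```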
https://www.icpsr.umich.edu/web/ICPSR/studies/3401/terms
The Substance Abuse Treatment Cost Allocation and Analysis Template (SATCAAT) is a unit cost protocol based on rigorous cost accounting methods and standards for collecting substance abuse treatment cost data. This protocol provides a uniform accounting system for treatment providers that ultimately translates costs by category into costs by unit of service. Each treatment provider may include up to eight service delivery units (SDUs), which are defined as a single treatment modality delivered at a single geographic site. Data are entered into a series of spreadsheets within the template, beginning with the conversion of the provider's financial accounting reports into the Center for Substance Abuse Treatment's (CSAT's) chart of accounts structure and continuing through the allocation of costs using a step-down method of cost allocation. The allocation of costs produces a "cost profile" of average cost per client by unit of service for each SDU. This data collection includes data from a purposive sample. These data are useful for examining patterns of service unit costs across the sampled SDUs; however, generalization to other providers or provider types is not possible. This release includes two files: (1) the SDU summary file for residential women and children (RWC) (78 records), and (2) cost data for all SDUs (213 records). The SDU summary file for RWC includes four SDU types: (1) residential pregnant and postpartum women, (2) residential long-term pregnant and postpartum women, (3) residential short-term pregnant and postpartum women, and (4) residential women and children. The SATCAAT Study includes data for multiple years for some SDUs; each year of data for each SDU constitutes one record. SATCAAT captures costs for 14 different services: initial assessment, medical exams, project evaluation, psychosocial evaluation, individual counseling, group counseling, HIV testing and counseling, medical and diagnostic services, housing and meals, clinical case management, networking and outreach, client transportation, client education, and staff education. The SDUs in the cost data file focused on (1) Aftercare, (2) Children, (3) Detox, (4) HIV, (5) Outpatient, (6) Residential, and (7) Women. This file includes 27 different SDU types based on the service delivery design of the sampled treatment providers (e.g., aftercare -- women only, outpatient aftercare -- adult, outpatient aftercare -- adolescent).
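The step-down method referred to above can be illustrated generically; the sketch below uses hypothetical cost centres, amounts, and allocation shares, and is not the SATCAAT template itself.

```python
# Generic step-down allocation sketch (hypothetical numbers, not the SATCAAT
# template): support cost centres are closed out in a fixed order; once a
# centre's cost has been allocated it receives no further allocations.
direct_costs = {"admin": 100_000, "facilities": 60_000,
                "outpatient": 250_000, "residential": 400_000}
support_order = ["admin", "facilities"]            # order in which supports are closed
allocation_base = {                                # share of each support's cost
    "admin":      {"facilities": 0.2, "outpatient": 0.3, "residential": 0.5},
    "facilities": {"outpatient": 0.4, "residential": 0.6},
}

costs = dict(direct_costs)
for support in support_order:
    pool = costs.pop(support)                      # close out this support centre
    for receiver, share in allocation_base[support].items():
        if receiver in costs:                      # closed centres receive nothing
            costs[receiver] += pool * share

print(costs)   # fully loaded costs for the service delivery units
# A unit cost per client-service would then be costs[sdu] / units_of_service[sdu].
```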
Mineralogy data collected from the CEAMARC-CASO voyage of the Aurora Australis during the 2007-2008 summer season. The data consist of a large number of images, plus documents detailing analysis methods and file descriptions.
Taken from the "Methods" document in the download file:
CEAMARC MINERALOGY METHODS Margaret Lindsay August 2009
Mineralogy sampling method (numbers in brackets refer to the image below): Individual bags containing the samples taken during the CEAMARC 2007/08 voyage (1) were emptied into a sorting tray and slightly defrosted to enable the biota to be separated and sorted into like biota (2). Taxonomic samples were selected to represent different species. The taxonomy sample was moved onto the bench and allocated an STD barcode, a photo was taken (3), and the image number, barcode and 'identification' of the biota were recorded. From the taxonomy sample a small (larger than 0.05 g) sample of the individual was dissected, weighed (4) and bagged separately; this sub-sample became the 'mineralogy sample'. The mineralogy samples were sent to Damien Gore at Macquarie University on 21/05/2009 for mineralogy analysis by Damien Gore and Peter Johnston.
Samples were tracked using the Sample Tracking Database (located at \aad.gov.au\files\HIRP ew-shared-hirp\30 Samples tracking + LIMS (Lab Inf Management Sys)\Sample Tracking Database\HIRP STD Working). The key barcodes are: the nally bins containing the CEAMARC samples are located in reefer 1 (-20 C) (barcode 11919); the original CEAMARC samples (parent container) are in nally bins 14762 and 14759; the taxonomy samples are in a nally bin barcoded 70469 (contains 10 bags); and the mineralogy samples are in a nally bin barcoded 70472 (contains three bags) and are currently at Macquarie University for mineralogy analysis.
Data was entered during the lab process into the spreadsheet file Sub sampling taxonomy and mineralogy.xls. The details of the spreadsheet's contents are as follows.
The list below describes each column in the 'Taxonomy and Mineralogy', 'bamboo coral' and 'other analyses' sheets from the Excel file Sub sampling taxonomy and mineralogy.xls (location described in G:\CEAMARC\CEAMARC MINERALOGY FILE DESCRIPTIONS.doc):
Date sampled: Date that the taxonomic samples were dissected to obtain the mineralogy samples
Parent barcode: STD barcode for the nally bin that the samples are located in
Site barcode: STD barcode for the CEAMARC site and deployment
CEAMARC site number: CEAMARC voyage sample site number
CEAMARC event number: The CEAMARC voyage event number is the sampling device's deployment number, related to the CEAMARC site number
Taxonomy bag barcode: STD barcode for the bag that contains the taxonomy samples
Image number: The image number of the taxonomy sample in its entirety before being dissected to obtain the mineralogy sample. The image contains the label from the initial sample and the sub sample barcode (for taxonomy)
Sub sample barcode (for taxonomy): The STD barcode allocated to the taxonomy sample
Analyses label for mineralogy: The number (identical to the sub sample barcode number) that identifies the mineralogy sample and links it back to the taxonomic sample
Analysis sample weight: The weight in grams of the dissected part that is the mineralogy sample
Mineralogy bag barcode: STD barcode for the bag that contains the mineralogy samples
Identification: Biota sample identification, e.g. gorgonian, bryozoan, ophiuroid
Mineralogy sample size: Relative size of the sample sent off for mineralogy analysis; small, medium or large sample
Taxonomy sample size: Relative size of the sample; small, medium or large sample (suitable for further analysis)
The 'KRILL' sheet in the above Excel file has the following columns:
Date sub sampled: Date that the taxonomic samples were dissected to obtain the mineralogy samples
Sample details: Sample code used to label the krill sample
Taxonomy bag barcode: STD barcode for the bag that contains the taxonomy samples
Image number: The image number of the taxonomy sample in its entirety before being dissected to obtain the mineralogy sample. The image contains the label from the initial sample and the sub sample barcode (for taxonomy)
Sub sample barcode (for taxonomy): The STD barcode allocated to the taxonomy sample
Analyses label for mineralogy: The number (identical to the sub sample barcode number) that identifies the mineralogy sample and links it back to the taxonomic sample
Analysis sample weight: The weight in grams of the dissected part that is the mineralogy sample
Mineralogy bag barcode: STD barcode for the bag that contains the mineralogy samples
Identification: Biota sample identification, e.g. gorgonian, bryozoan, ophiuroid
Mineralogy sample size: Relative size of the sample sent off for mineralogy analysis; small, medium or large sample
Taxonomy sample size: Relative size of the sample; small, medium or large sample (suitable for further analysis)
Voyage: The ANARE voyage number and year, expressed as, e.g., V4 02/03
Station: Station number that the samples were obtained from
Date: Date that the samples were taken during the voyage
Time: Time that the samples were taken during the voyage
Location: Location that the samples were taken from during the voyage
Net: The RMT 8 and RMT 1 nets were used to collect the krill
Depth: The depth that the samples were obtained from (25 meters)
Total mineralogy samples: 1033 mineralogy samples + 15 bamboo coral samples + 12 krill samples = 1060 samples
https://www.archivemarketresearch.com/privacy-policy
Market Overview: The global spreadsheet editor market is projected to reach a value of XXX million by 2033, growing at a CAGR of XX% during the forecast period (2025-2033). The market is driven by the increasing adoption of digital tools for data management, collaboration, and analysis. The rise of cloud computing and the integration of AI and machine learning capabilities into spreadsheet software are further fueling market growth. Key trends in the market include the increasing popularity of free and open-source spreadsheet editors, the adoption of spreadsheet software by SMBs, and the integration of spreadsheet software with other business applications.
Competitive Landscape: The spreadsheet editor market is dominated by a few major players, including Microsoft, Google, Apple, and the Apache Software Foundation. Microsoft Excel is the market leader, with a significant share of the market. Other major products include Google Sheets, Apple Numbers, and Apache OpenOffice Calc. The market is also witnessing the emergence of new players offering innovative spreadsheet solutions. For example, Ragic provides cloud-based spreadsheet software with advanced collaboration features, while Spreadsheetsoftware offers a spreadsheet editor specifically designed for financial modeling. The competitive landscape is expected to intensify in the coming years as more players enter the market with differentiated offerings.
The latest estimates from the 2010/11 Taking Part adult survey produced by DCMS were released on 30 June 2011 according to the arrangements approved by the UK Statistics Authority.
30 June 2011
April 2010 to April 2011
National and Regional level data for England.
Further analysis of the 2010/11 adult dataset and data for child participation will be published on 18 August 2011.
The latest data from the 2010/11 Taking Part survey provides reliable national estimates of adult engagement with sport, libraries, the arts, heritage and museums & galleries. This release also presents analysis on volunteering and digital participation in our sectors and a look at cycling and swimming proficiency in England. The Taking Part survey is a continuous annual survey of adults and children living in private households in England, and carries the National Statistics badge, meaning that it meets the highest standards of statistical quality.
These spreadsheets contain the data and sample sizes for each sector included in the survey:
The previous Taking Part release was published on 31 March 2011 and can be found online.
This release is published in accordance with the Code of Practice for Official Statistics (2009), as produced by the UK Statistics Authority (UKSA) (http://www.statisticsauthority.gov.uk/). The UKSA has the overall objective of promoting and safeguarding the production and publication of official statistics that serve the public good. It monitors and reports on all official statistics, and promotes good practice in this area.
The document below contains a list of Ministers and Officials who have received privileged early access to this release of Taking Part data. In line with best practice, the list has been kept to a minimum and those given access for briefing purposes had a maximum of 24 hours.
The responsible statistician for this release is Neil Wilson. For any queries please contact the Taking Part team on 020 7211 6968 or takingpart@culture.gsi.gov.uk.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a collection of XLSX sheets containing some of my favorite Excel tricks for reformatting data to make analysis easier. I often use these to reformat column-formatted data into a plate layout, or vice versa, to better visualize and understand my data.
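One such trick, converting column-formatted well data into a plate layout and back, can also be reproduced outside Excel; the sketch below assumes pandas and hypothetical column names, and is an illustration rather than a copy of the XLSX sheets.

```python
# Minimal sketch (hypothetical column names): pivot long/column-formatted
# plate-reader data into a plate layout, and melt it back again.
import pandas as pd

long_df = pd.DataFrame({
    "well":  ["A1", "A2", "B1", "B2"],            # hypothetical wells
    "value": [0.12, 0.34, 0.56, 0.78],
})
long_df["row"] = long_df["well"].str[0]            # plate row letter
long_df["col"] = long_df["well"].str[1:].astype(int)

plate = long_df.pivot(index="row", columns="col", values="value")  # plate layout
print(plate)

back_to_long = plate.stack().rename("value").reset_index()         # and back again
back_to_long["well"] = back_to_long["row"] + back_to_long["col"].astype(str)
print(back_to_long[["well", "value"]])
```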
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A dataset of synchrotron X-ray diffraction (SXRD) analysis files, recording the refinement of crystallographic texture from a number of Ti-6Al-4V (Ti-64) sample matrices, containing a total of 93 hot-rolled samples, from three different orthogonal sample directions. The aim of the work was to accurately quantify bulk macro-texture for both the α (hexagonal close packed, hcp) and β (body-centred cubic, bcc) phases across a range of different processing conditions.
Material
Prior to the experiment, the Ti-64 materials had been hot-rolled at a range of different temperatures, and to different reductions, followed by air-cooling, using a rolling mill at The University of Manchester. Rectangular specimens (6 mm x 5 mm x 2 mm) were then machined from the centre of these rolled blocks, and from the starting material. The samples were cut along different orthogonal rolling directions and are referenced according to the alignment of the rolling directions (RD – rolling direction, TD – transverse direction, ND – normal direction) with the long horizontal (X) axis and short vertical (Y) axis of the rectangular specimens. Samples of the same orientation were glued together to form matrices for the synchrotron analysis. The material, rolling conditions, sample orientations and experiment reference numbers used for the synchrotron diffraction analysis are included in the data as an Excel spreadsheet.
SXRD Data Collection
Data was recorded using a high energy 90 keV synchrotron X-ray beam and a 5 second exposure at the detector for each measurement point. The slits were adjusted to give a 0.5 x 0.5 mm beam area, chosen to optimally resolve both the α and β phase peaks. The SXRD data was recorded by stage-scanning the beam in sequential X-Y positions at 0.5 mm increments across the rectangular sample matrices, containing a number of samples glued together, to analyse a total of 93 samples from the different processing conditions and orientations. Post-processing of the data was then used to sort the data into a rectangular grid of measurement points from each individual sample.
Diffraction Pattern Averaging
The stage-scan diffraction pattern images from each matrix were sorted into individual samples, and the images averaged together for each specimen, using a Python notebook sxrd-tiff-summer. The averaged .tiff images each capture average diffraction peak intensities from an area of about 30 mm2 (equivalent to a total volume of ~ 60 mm3), with three different sample orientations then used to calculate the bulk crystallographic texture from each rolling condition.
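The averaging step can be sketched generically as below; this is not the sxrd-tiff-summer notebook itself, and it assumes the tifffile and numpy packages plus hypothetical file paths.

```python
# Generic sketch of averaging a stack of diffraction-pattern .tiff images for
# one sample (not the sxrd-tiff-summer notebook; assumes tifffile + numpy and
# hypothetical file paths).
from pathlib import Path
import numpy as np
import tifffile

frames = sorted(Path("sample_01").glob("*.tiff"))          # hypothetical folder
stack = np.stack([tifffile.imread(f).astype(np.float64) for f in frames])
mean_image = stack.mean(axis=0)                            # average peak intensities

tifffile.imwrite("sample_01_averaged.tiff", mean_image.astype(np.float32))
```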
SXRD Data Analysis
A new Fourier-based peak fitting method from the Continuous-Peak-Fit Python package was used to fit full diffraction pattern ring intensities, using a range of different lattice plane peaks for determining crystallographic texture in both the α and β phases. Bulk texture was calculated by combining the ring intensities from three different sample orientations.
A .poni calibration file was created using Dioptas, through a refinement matching peak intensities from a LaB6 or CeO2 standard diffraction pattern image. Two calibrations were needed as some of the data was collected in July 2022 and some of the data was collected in August 2022. Dioptas was then used to determine peak bounds in 2θ for characterising a total of 22 α and 4 β lattice plane rings from the averaged Ti-64 diffraction pattern images, which were recorded in a .py input script. Using these two inputs, Continuous-Peak-Fit automatically converts full diffraction pattern rings into profiles of intensity versus azimuthal angle, for each 2θ section, which can also include multiple overlapping α and β peaks.
The Continuous-Peak-Fit refinement can be launched in a notebook or from the terminal, to automatically calculate a full mathematical description, in the form of Fourier expansion terms, to match the intensity variation of each individual lattice plane ring. The results for peak position, intensity and half-width for all 22 α and 4 β lattice plane peaks were recorded at an azimuthal resolution of 1º and stored in a .fit output file. Details for setting up and running this analysis can be found in the continuous-peak-fit-analysis package. This package also includes a Python script for extracting lattice plane ring intensity distributions from the .fit files, matching the intensity values with spherical polar coordinates to parametrise the intensity distributions from each of the three different sample orientations, in the form of pole figures. The script can also be used to combine intensity distributions from different sample orientations. The final intensity variations are recorded for each of the lattice plane peaks as text files, which can be loaded into MTEX to plot and analyse both the α and β phase crystallographic texture.
Metadata
An accompanying YAML text file contains associated SXRD beamline metadata for each measurement. The raw data is in the form of synchrotron diffraction pattern .tiff images which were too large to upload to Zenodo and are instead stored on The University of Manchester's Research Database Storage (RDS) repository. The raw data can therefore be obtained by emailing the authors.
The material data folder documents the machining of the samples and the sample orientations.
The associated processing metadata for the Continuous-Peak-Fit analyses records information about the different packages used to process the data, along with details about the different files contained within this analysis dataset.