Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the attached Excel file, "Example Student Data", there are six sheets: three sheets with sample datasets, one for each of the three different exercise protocols described, and three sheets with sample graphs created using one of the three datasets.
· Sheets 1 and 2: This is an example of a dataset and graph created from an exercise protocol designed to stress the creatine phosphate system. Here, the subject was a track and field athlete who threw the shot put for the DeSales University track team. The NIRS monitor was placed on the right triceps muscle, and the student threw the shot put six times with a minute of rest in between throws. Data was collected telemetrically by the NIRS device and then downloaded after the student had completed the protocol.
· Sheets 3 and 4: This is an example of a dataset and graph created from an exercise protocol designed to stress the glycolytic energy system. In this example, the subject performed continuous squat jumps for 30 seconds, followed by a 90-second rest period, for a total of three exercise bouts. The NIRS monitor was placed on the left gastrocnemius muscle. Here again, data was collected telemetrically by the NIRS device and then downloaded after the subject had completed the protocol.
· Sheets 5 and 6: In this example, the dataset and graph are from an exercise protocol designed to stress the oxidative system. Here, the student held a light-intensity, isometric biceps contraction (pushing against a table). The NIRS monitor was attached to the left biceps muscle belly. In this case, data was collected by a student observing the SmO2 values displayed on a secondary device, specifically a smartphone running the IPSensorMan app. The recording student observed and recorded the data in an Excel spreadsheet and marked the times that exercise began and ended on the spreadsheet.
CSVs with more than 1 million rows can be viewed using add-ons to existing software, such as the Microsoft PowerPivot add-on for Excel, to handle larger data sets. The Microsoft PowerPivot add-on for Excel is available using the link in the 'Related Links' section below. Once PowerPivot has been installed, to load the large files, please follow the instructions below. Note that it may take at least 20 to 30 minutes to load one monthly file.
1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV
Excel spreadsheets by species (the 4-letter code is the abbreviation for the genus and species used in the study; the year, 2010 or 2011, is the year the data were collected; SH indicates data for Science Hub; the date is the date of file preparation). The data in a file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in that file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY. Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813, (2017).
This dataset contains the valuation template researchers can use to retrieve real-time stock prices in Excel and Google Sheets. The dataset is provided by Finsheet, the leading financial data provider for spreadsheet users. To get more financial data, visit the website and explore their functions. For instance, to get the last 30 years of income statements for Meta Platforms Inc, the syntax would be:
=FS_EquityFullFinancials("FB", "ic", "FY", 30)
Likewise, this syntax will return the latest stock price for Caterpillar Inc right in your spreadsheet:
=FS_Latest("CAT")
If you need assistance with any of the functions, feel free to reach out to their customer support team. To get started, install their Excel and Google Sheets add-on.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the distribution of median household income among distinct age brackets of householders in Excel. Based on the latest 2019-2023 5-Year Estimates from the American Community Survey, it displays how income varies among householders of different ages in Excel. It showcases how household incomes typically rise as the head of the household gets older. The dataset can be utilized to gain insights into age-based household income trends and explore the variations in incomes across households.
Key observations: Insights from 2023
In terms of income distribution across age cohorts, in Excel, where only two age groups are delineated, the median household income is $83,750 for householders within the 25 to 44 years age group, compared to $58,958 for the 65 years and over age group.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. All incomes have been adjusted for inflation and are presented in 2023 inflation-adjusted dollars.
Age group classifications include:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on the estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for any of your research projects, reports or presentations, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
Neilsberg Research Team curates, analyzes and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income by age. You can refer to it here.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In "Sample Student Data", there are 6 sheets. There are three sheets with sample datasets, one for each of the three different exercise protocols described (CrP Sample Dataset, Glycolytic Dataset, Oxidative Dataset). Additionally, there are three sheets with sample graphs created using one of the three datasets (CrP Sample Graph, Glycolytic Graph, Oxidative Graph). Each dataset and graph pairs are from different subjects. · CrP Sample Dataset and CrP Sample Graph: This is an example of a dataset and graph created from an exercise protocol designed to stress the creatine phosphate system. Here, the subject was a track and field athlete who threw the shot put for the DeSales University track team. The NIRS monitor was placed on the right triceps muscle, and the student threw the shot put six times with a minute rest in between throws. Data was collected telemetrically by the NIRS device and then downloaded after the student had completed the protocol. · Glycolytic Dataset and Glycolytic Graph: This is an example of a dataset and graph created from an exercise protocol designed to stress the glycolytic energy system. In this example, the subject performed continuous squat jumps for 30 seconds, followed by a 90 second rest period, for a total of three exercise bouts. The NIRS monitor was place on the left gastrocnemius muscle. Here again, data was collected telemetrically by the NIRS device and then downloaded after he had completed the protocol. · Oxidative Dataset and Oxidative Graph: In this example, the dataset and graph are from an exercise protocol designed to stress the oxidative system. Here, the student held a sustained, light-intensity, isometric biceps contraction (pushing against a table). The NIRS monitor was attached to the left biceps muscle belly. Here, data was collected by a student observing the SmO2 values displayed on a secondary device; specifically, a smartphone with the IPSensorMan APP displaying data. The recorder student observed and recorded the data on an Excel Spreadsheet, and marked the times that exercise began and ended on the Spreadsheet.
About Dataset
The dataset contains information about sales transactions, including details such as the customer's age, gender, and location, and the products sold. It includes both the cost of each product and the revenue generated from its sale, allowing profit and profit margins to be calculated. The customer age and gender fields can be used to analyze purchasing behavior across different demographic groups. The dataset likely includes both numeric and categorical data, which require different types of analysis and visualization techniques. Overall, the dataset appears to provide a comprehensive view of sales transactions, with the potential for analysis at multiple levels, including by product, customer, and location; however, on its own it does not contain ready-made insights for decision makers.
- After understanding the dataset, I cleaned it and added some columns and calculations, such as Net Profit and Age Status.
- I built a model in Power Pivot, calculated measures such as Total Profit, COGS, and Total Revenue, and built a KPIs model.
- Then I asked some questions:
About distribution:
- What are the total revenues and profits?
- What is the best-selling country in terms of revenue?
- What are the five best-selling states in terms of revenue?
- What are the five lowest-selling states in terms of revenue?
- How does age relate to revenue?
About profitability:
- What are the total revenues and profits?
- What is each month's position in terms of revenues and profits?
- What is each month's position in terms of COGS?
- What are the top-selling categories in terms of revenue and profit?
- What are the three best-selling sub-categories in terms of profit?
About KPIs:
- Explain each salesperson's position relative to their target.
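The added columns and measures described above are simple aggregations; as a rough illustration, here is a minimal pandas sketch of equivalent calculations (pandas stands in for Power Pivot/DAX here, and all column names are assumptions, not taken from the dataset itself):

```python
# Hypothetical column names; pandas used in place of Power Pivot/DAX.
import pandas as pd

sales = pd.read_excel("sales_transactions.xlsx")

# Added columns
sales["Net Profit"] = sales["Revenue"] - sales["Cost"]
sales["Age Status"] = pd.cut(sales["Customer Age"],
                             bins=[0, 24, 64, 120],
                             labels=["Youth", "Adult", "Senior"])

# Measures
total_revenue = sales["Revenue"].sum()
cogs = sales["Cost"].sum()
total_profit = total_revenue - cogs

# Example question: five best-selling states by revenue
top_states = sales.groupby("State")["Revenue"].sum().nlargest(5)
```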
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1. Introduction
Sales data collection is a crucial aspect of any manufacturing industry as it provides valuable insights about the performance of products, customer behaviour, and market trends. By gathering and analysing this data, manufacturers can make informed decisions about product development, pricing, and marketing strategies in Internet of Things (IoT) business environments like the dairy supply chain.
One of the most important benefits of the sales data collection process is that it allows manufacturers to identify their most successful products and target their efforts towards those areas. For example, if a manufacturer notices that a particular product is selling well in a certain region, this information can be used to develop new products, optimise the supply chain, or improve existing products to meet the changing needs of customers.
This dataset includes information about 7 of MEVGAL's products [1]. The published data will help researchers understand the dynamics of the dairy market and its consumption patterns, creating fertile ground for synergies between academia and industry and eventually helping the industry make informed decisions regarding product development, pricing and market strategies in the IoT playground. The dataset could also be used to understand the impact of external factors on the dairy market, such as economic, environmental, and technological factors, and could help in understanding the current state of the dairy industry and identifying potential opportunities for growth and development.
Please cite the following papers when using this dataset:
I. Siniosoglou, K. Xouveroudis, V. Argyriou, T. Lagkas, S. K. Goudos, K. E. Psannis and P. Sarigiannidis, "Evaluating the Effect of Volatile Federated Timeseries on Modern DNNs: Attention over Long/Short Memory," in the 12th International Conference on Circuits and Systems Technologies (MOCAST 2023), April 2023, Accepted
The dataset includes data regarding the daily sales of a series of dairy product codes offered by MEVGAL. In particular, the dataset includes information gathered by the logistics division and agencies within the industrial infrastructures overseeing the production of each product code. The products included in this dataset represent the daily sales and logistics of a variety of yogurt-based stock. Each of the different files includes the logistics for that product on a daily basis for three years, from 2020 to 2022.
3.1 Data Collection
The process of building this dataset involves several steps to ensure that the data is accurate, comprehensive and relevant.
The first step is to determine the specific data that is needed to support the business objectives of the industry, i.e., in this publication’s case the daily sales data.
Once the data requirements have been identified, the next step is to implement an effective sales data collection method. In MEVGAL’s case this is conducted through direct communication and reports generated each day by representatives & selling points.
It is also important for MEVGAL to ensure that the data collection process is conducted in an ethical and compliant manner, adhering to data privacy laws and regulations. The industry also has a data management plan in place to ensure that the data is securely stored and protected from unauthorised access.
The published dataset consists of 13 features providing information about the date and the number of products that have been sold. Finally, the dataset was anonymised in consideration of the privacy requirements of the data owner (MEVGAL).
File | Period | Number of Samples (days)
product 1 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 1 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 1 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 2 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 2 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 2 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 3 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 3 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 3 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 4 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 4 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 4 2022.xlsx | 01/01/2022–31/12/2022 | 364
product 5 2020.xlsx | 01/01/2020–31/12/2020 | 363
product 5 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 5 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 6 2020.xlsx | 01/01/2020–31/12/2020 | 362
product 6 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 6 2022.xlsx | 01/01/2022–31/12/2022 | 365
product 7 2020.xlsx | 01/01/2020–31/12/2020 | 362
product 7 2021.xlsx | 01/01/2021–31/12/2021 | 364
product 7 2022.xlsx | 01/01/2022–31/12/2022 | 365
3.2 Dataset Overview
The following table enumerates and explains the features included in all of the files.

Feature | Description | Unit
Day | Day of the month | -
Month | Month | -
Year | Year | -
daily_unit_sales | Daily sales: the number of units of the product sold on that day | units
previous_year_daily_unit_sales | The number of units sold on the same day the previous year | units
percentage_difference_daily_unit_sales | The percentage difference between the two values above | %
daily_unit_sales_kg | The amount of product sold on that day, measured in kilograms | kg
previous_year_daily_unit_sales_kg | The amount of product sold on the same day the previous year, measured in kilograms | kg
percentage_difference_daily_unit_sales_kg | The percentage difference between the two values above | %
daily_unit_returns_kg | The percentage of the product shipped to selling points that was returned | %
previous_year_daily_unit_returns_kg | The percentage of the product shipped to selling points that was returned the previous year | %
points_of_distribution | The number of sales representatives through which the product was sold to the market for this year | -
previous_year_points_of_distribution | The number of sales representatives through which the product was sold to the market on the same day the previous year | -
Table 1 – Dataset Feature Description
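To make the relationship between the sales features concrete, the following pandas sketch reads one product-year file and recomputes the year-on-year percentage difference. The exact formula is not stated in the documentation, so (current − previous) / previous × 100 is an assumption:

```python
# Assumed formula: (current - previous) / previous * 100.
import pandas as pd

df = pd.read_excel("product 1 2020.xlsx")  # one row per day

recomputed = 100 * (
    df["daily_unit_sales"] - df["previous_year_daily_unit_sales"]
) / df["previous_year_daily_unit_sales"]

# If the assumption holds, this difference should be near zero.
print((recomputed - df["percentage_difference_daily_unit_sales"]).abs().max())
```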
4.1 Dataset Structure
The provided dataset has the following structure, where:
Name | Type | Property
Readme.docx | Report | A file that contains the documentation of the dataset.
product X | Folder | A folder containing the data of product X.
product X YYYY.xlsx | Data file | An Excel file containing the sales data of product X for year YYYY.
Table 2 – Dataset File Description
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 957406 (TERMINET).
References
[1] MEVGAL is a Greek dairy production company.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the detailed breakdown of the count of individuals within distinct income brackets, categorizing them by gender (men and women) and employment type - full-time (FT) and part-time (PT), offering valuable insights into the diverse income landscapes within Excel. The dataset can be utilized to gain insights into gender-based income distribution within the Excel population, aiding in data analysis and decision-making.
Key observations
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.
Income brackets:
Variables / Data Columns
Employment type classifications include:
Good to know
Margin of Error
Data in the dataset are based on the estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for any of your research projects, reports or presentations, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
Neilsberg Research Team curates, analyzes and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income by race. You can refer to it here.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents median household incomes for various household sizes in Excel, AL, as reported by the U.S. Census Bureau. The dataset highlights the variation in median household income with the size of the family unit, offering valuable insights into economic trends and disparities within different household sizes, aiding in data analysis and decision-making.
Key observations
[Chart: Excel, AL median household income, by household size (in 2022 inflation-adjusted dollars) - https://i.neilsberg.com/ch/excel-al-median-household-income-by-household-size.jpeg]
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Household Sizes:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on the estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for any of your research projects, reports or presentations, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
Neilsberg Research Team curates, analyzes and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income. You can refer to it here.
https://opendata.nhsbsa.net/dataset/foi-01204
April 2023: https://opendata.nhsbsa.net/dataset/foi-01240
May 2023: https://opendata.nhsbsa.net/dataset/foi-01310
June 2023: https://opendata.nhsbsa.net/dataset/foi-01378
July 2023: FOI-01424 - Datasets - Open Data Portal BETA (nhsbsa.net)
August 2023: https://opendata.nhsbsa.net/dataset/foi-01502
September 2023: https://opendata.nhsbsa.net/dataset/foi-01550
October 2023: https://opendata.nhsbsa.net/dataset/foi-01668
November 2023: https://opendata.nhsbsa.net/dataset/foi-01669
December 2023: https://opendata.nhsbsa.net/dataset/foi-01756
Some data sets are over 1 million rows, and you may need to use add-ons already existing in Microsoft Excel to view a data set in its entirety. The Microsoft PowerPivot add-on for Excel can be used to handle larger data sets; it is available using the link in the 'Related Links' section below: https://www.microsoft.com/en-us/download/details.aspx?id=43348
Once PowerPivot has been installed, to load the large files, please follow the instructions below:
1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data includes the distance, temperature, and redshift of 93,060 nearby space objects, including stars, quasars, white dwarfs, and carbon stars. The objects' temperatures are between 671 and 99,575 K, and their distances are between 0.5 and 413.13 (mas). We retrieved this information from almost 2,200,000 records. In addition, we have added two new columns providing the equivalent distance in light years and the peak frequency of the black body. We have excluded space objects whose temperature is not recorded and space objects whose redshift is less than zero (blueshift). All data are in a simple table in a Microsoft Access database, and a copy of the data is provided in an Excel file. A text file includes the basic script for downloading the data.
Acknowledgments: This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France: Wenger et al., "The SIMBAD astronomical database", A&AS, 143, 9 (2000).
This page provides data for the 3rd Grade Reading Level Proficiency performance measure. The dataset includes the student performance results on the English/Language Arts section of the AzMERIT from Fall 2017 and Spring 2018. Data is representative of third-grade students in public elementary schools in Tempe, including schools from both the Tempe Elementary and Kyrene districts. Results are by school and provide the total number of students tested, the total percentage passing, and the percentage of students scoring at each of the four levels of proficiency. The performance measure dashboard is available at 3.07 3rd Grade Reading Level Proficiency.
Additional Information
Source: Arizona Department of Education
Contact: Ann Lynn DiDomenico
Contact E-Mail: Ann_DiDomenico@tempe.gov
Data Source Type: Excel/CSV
Preparation Method: Filters on original dataset: within "Schools" tab, School District [select Tempe School District and Kyrene School District]; School Name [deselect Kyrene SD schools not in Tempe city limits]; Content Area [select English Language Arts]; Test Level [select Grade 3]; Subgroup/Ethnicity [select All Students]. Remove irrelevant fields; add Fiscal Year.
Publish Frequency: Annually as data becomes available
Publish Method: Manual
Data Dictionary
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The various performance criteria applied in this analysis include the probability of reaching the ultimate target, the costs, elapsed times, and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic and statistical data entered by a user and required for the evaluation of the criteria. It also reports the results of all the computations.
The Home Office has changed the format of the published data tables for a number of areas (asylum and resettlement, entry clearance visas, extensions, citizenship, returns, detention, and sponsorship). These now include summary tables, and more detailed datasets (available on a separate page, link below). A list of all available datasets on a given topic can be found in the ‘Contents’ sheet in the ‘summary’ tables. Information on where to find historic data in the ‘old’ format is in the ‘Notes’ page of the ‘summary’ tables.
The Home Office intends to make these changes in other areas in the coming publications. If you have any feedback, please email MigrationStatsEnquiries@homeoffice.gov.uk.
Immigration statistics, year ending September 2020
Immigration Statistics Quarterly Release
Immigration Statistics User Guide
Publishing detailed data tables in migration statistics
Policy and legislative changes affecting migration to the UK: timeline
Immigration statistics data archives
Asylum and resettlement summary tables, year ending December 2020 (MS Excel Spreadsheet, 359 KB): https://assets.publishing.service.gov.uk/media/602bab69e90e070562513e35/asylum-summary-dec-2020-tables.xlsx
Detailed asylum and resettlement datasets
Sponsorship summary tables, year ending December 2020 (MS Excel Spreadsheet, 67.7 KB): https://assets.publishing.service.gov.uk/media/602bab8fe90e070552b33515/sponsorship-summary-dec-2020-tables.xlsx
Entry clearance visas summary tables, year ending December 2020 (MS Excel Spreadsheet, 70.3 KB): https://assets.publishing.service.gov.uk/media/602bf8708fa8f50384219401/visas-summary-dec-2020-tables.xlsx
Detailed entry clearance visas datasets
Passenger arrivals (admissions) summary tables, year ending December 2020 (MS Excel Spreadsheet, 70.6 KB): https://assets.publishing.service.gov.uk/media/602bac148fa8f5037f5d849c/passenger-arrivals-admissions-summary-dec-2020-tables.xlsx
Detailed Passengers initially refused entry at port datasets
Extensions summary tables, year ending December 2020 (MS Excel Spreadsheet, 41.5 KB): https://assets.publishing.service.gov.uk/media/602bac3d8fa8f50383c41f7c/extentions-summary-dec-2020-tables.xlsx
https://www.gov.uk/governmen
The excelforms extension for CKAN provides a mechanism for users to input data into Table Designer tables using Excel-based forms, enhancing data entry efficiency. This extension focuses on streamlining the process of adding data rows to tables within CKAN's Table Designer. A key component of the functionality is the ability to import multiple rows in a single operation, which significantly reduces the overhead associated with entering multiple data points.
Key Features:
- Excel-Based Forms: Users can enter data using familiar Excel spreadsheets, leveraging their existing skills and software.
- Table Designer Integration: Designed to work seamlessly with CKAN's Table Designer, extending its functionality to include Excel-based data entry.
- Multiple Row Import: Supports importing multiple rows of data at once, improving data entry efficiency, especially when dealing with large datasets.
- Data Mapping: Simplifies the process of aligning Excel column headers to their corresponding data fields in tables.
- Improved Data Entry Speed: Provides an alternative to manual data entry, resulting in faster population and easier updates.
Technical Integration: The excelforms extension integrates with CKAN by introducing new functionalities and workflows around the Table Designer plugin. The installation instructions specify that this plugin be listed before the tabledesigner plugin (i.e., excelforms precedes tabledesigner in the ckan.plugins setting).
Benefits & Impact: By enabling Excel-based data entry, the excelforms extension improves the user experience for those familiar with spreadsheet software. The ability to import multiple rows simultaneously significantly reduces the time and effort required to populate tables, particularly when dealing with large amounts of data. The impact is better data accessibility through streamlined data population workflows.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the distribution of median household income among distinct age brackets of householders in Excel. Based on the latest 2017-2021 5-Year Estimates from the American Community Survey, it displays how income varies among householders of different ages in Excel. It showcases how household incomes typically rise as the head of the household gets older. The dataset can be utilized to gain insights into age-based household income trends and explore the variations in incomes across households.
Key observations: Insights from 2021
In terms of income distribution across age cohorts, Excel only reports a median household income of $101,336 among householders in the 25 to 44 years age group.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates. All incomes have been adjusted for inflation and are presented in 2022 inflation-adjusted dollars.
Age group classifications include:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on the estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for any of your research projects, reports or presentations, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
Neilsberg Research Team curates, analyzes and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel median household income by age. You can refer to it here.
This workflow aims to streamline the integration of phytosociological inventory data stored in Excel format into a MongoDB database. This process is essential for the project's Virtual Research Environment (VRE), facilitating comprehensive data analysis. Key components include converting Excel files to JSON format, checking for duplicate inventories to ensure data integrity, and uploading the JSON files to the database. This workflow promotes a reliable, robust dataset for further exploration and utilization within the VRE, enhancing the project's inventory database.
Background
Efficient data management in phytosociological inventories requires seamless integration of inventory data. This workflow facilitates the importation of phytosociological inventories in Excel format into the MongoDB database, connected to the project's Virtual Research Environment (VRE). The workflow comprises two components: converting Excel to JSON and checking for inventory duplicates, ultimately enhancing the inventory database.
Introduction
Phytosociological inventories demand efficient data handling, especially concerning the integration of inventory data. This workflow focuses on the pivotal task of importing phytosociological inventories, stored in Excel format, into the MongoDB database. This process is integral to the VRE of the project, laying the groundwork for comprehensive data analysis. The workflow's primary goal is to ensure a smooth and duplicate-free integration, promoting a reliable dataset for further exploration and utilization within the project's VRE.
Aims
The primary aim of this workflow is to streamline the integration of phytosociological inventory data into the MongoDB database, ensuring a robust and duplicate-free dataset for further analysis within the project's VRE. To achieve this, the workflow includes the following key components:
1. Excel to JSON Conversion: Converts phytosociological inventories stored in Excel format to JSON, preparing the data for MongoDB compatibility.
2. Duplicate Check and Database Upload: Checks for duplicate inventories in the MongoDB database and uploads the JSON file, incrementing the inventory count in the database.
Scientific Questions
- Data Format Compatibility: How effectively does the workflow convert Excel-based phytosociological inventories to the JSON format for MongoDB integration?
- Database Integrity Check: How successful is the duplicate check component in ensuring data integrity by identifying and handling duplicate inventories?
- Inventory Count Increment: How does the workflow contribute to the increment of the inventory count in the MongoDB database, and how is this reflected in the overall project dataset?
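As a rough illustration of the two components, here is a minimal Python sketch; the file names, the inventory_id field, and the local MongoDB connection are assumptions, since the project's actual scripts, schema, and connection details are not described here:

```python
# Minimal sketch of the Excel-to-JSON conversion and duplicate-checked
# upload. All names (files, database, collection, id field) are hypothetical.
import json
import pandas as pd
from pymongo import MongoClient

def excel_to_json(xlsx_path: str, json_path: str) -> list:
    """Convert one sheet of inventories to a list of JSON records."""
    df = pd.read_excel(xlsx_path)  # requires openpyxl for .xlsx files
    records = df.to_dict(orient="records")
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, default=str)
    return records

def upload_if_new(records: list, uri: str = "mongodb://localhost:27017") -> int:
    """Insert only inventories whose 'inventory_id' is not already stored."""
    coll = MongoClient(uri)["vre"]["inventories"]
    inserted = 0
    for rec in records:
        # Duplicate check: skip records already present in the database.
        if coll.count_documents({"inventory_id": rec["inventory_id"]}, limit=1) == 0:
            coll.insert_one(rec)
            inserted += 1
    return inserted

if __name__ == "__main__":
    recs = excel_to_json("inventories.xlsx", "inventories.json")
    print(f"Inserted {upload_if_new(recs)} new inventories")
```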
https://digital.nhs.uk/about-nhs-digital/terms-and-conditionshttps://digital.nhs.uk/about-nhs-digital/terms-and-conditions
Warning: Large file size (over 1GB). Each monthly data set is large (over 4 million rows), but can be viewed in standard software such as Microsoft WordPad (save by right-clicking on the file name and selecting 'Save Target As', or equivalent on Mac OSX). It is then possible to select the required rows of data and copy and paste the information into another software application, such as a spreadsheet. Alternatively, add-ons to existing software, such as the Microsoft PowerPivot add-on for Excel, can be used to handle larger data sets. The Microsoft PowerPivot add-on for Excel is available from Microsoft: http://office.microsoft.com/en-gb/excel/download-power-pivot-HA101959985.aspx
Once PowerPivot has been installed, to load the large files, please follow the instructions below. Note that it may take at least 20 to 30 minutes to load one monthly file.
1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV
Once the data has been imported you can view it in a spreadsheet.
What does the data cover?
General practice prescribing data is a list of all medicines, dressings and appliances that are prescribed and dispensed each month. A record will only be produced when this has occurred; there is no record for a zero total. For each practice in England, the following information is presented at presentation level for each medicine, dressing and appliance (by presentation name):
- the total number of items prescribed and dispensed
- the total net ingredient cost
- the total actual cost
- the total quantity
The data covers NHS prescriptions written in England and dispensed in the community in the UK. Prescriptions written in England but dispensed outside England are included. The data includes prescriptions written by GPs and other non-medical prescribers (such as nurses and pharmacists) who are attached to GP practices. GP practices are identified only by their national code, so an additional data file - linked to the first by the practice code - provides further detail in relation to the practice. Presentations are identified only by their BNF code, so an additional data file - linked to the first by the BNF code - provides the chemical name for that presentation.
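Since the three files link on the practice code and the BNF code, a minimal pandas sketch of the join might look like this (file and column names are assumptions; the published files use their own headers):

```python
# Hypothetical file and column names for the three linked files.
import pandas as pd

rx = pd.read_csv("prescribing.csv")        # one row per presentation per practice
practices = pd.read_csv("practices.csv")   # practice details, keyed by national code
chemicals = pd.read_csv("chemicals.csv")   # chemical names, keyed by BNF code

full = (rx.merge(practices, on="PRACTICE_CODE")
          .merge(chemicals, on="BNF_CODE"))
```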
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was derived by the Bioregional Assessment Programme. The parent datasets are identified in the Lineage statement in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
This dataset comprises interpreted elevation surfaces and contours for the major Triassic and Upper Permian units of the Galilee Geological Basin.
This dataset was created to provide formation extents for aquifers in the Galilee geological basin.
A Quality Assurance (QA) and validation process was conducted on the original well and bore data to choose wells/bores that are within 25 kilometres of the BA Galilee Region extent.
The QA/Validation process is as follows:
Well data
a. Obtained Excel file "QPED_July_2013_galilee.xlsx" from GA
b. Based on stratigraphic information in the "BH_costrat" tab, formation names were regularised and simplified based on current naming conventions.
c. Simplified names added to QPED_July_2013_galilee.xlsx as "Steve_geo" and "Steve_group"
d. Produced new file "GSQ_Geology.xlsx" containing decimal latitude and longitude, KB elevation, top of unit in metres from KB, top of unit in metres AHD, bottom of unit in metres from KB, bottom of unit in metres AHD, original geology, simplified geology, and simplified Group geology.
i. KB obtained from "BH_wellhist"
ii. Where no KB information was available, i.e. KB = 0, sampled the 1S DEM at the well's location to obtain height, with KB = DEM + 10. Marked such wells as having lower reliability. (The KB fallback and the AHD conversions below are illustrated in the sketch after the Well data steps.)
iii. Calculated Top_m_AHD = KB - Top_m_KB
iv. Calculated Bottom_m_AHD = KB - Bottom_m_KB
e. Brought GSQ_Geology.xlsx into ArcGIS
f. Selected wells based on "Steve_geo" field for each model layer to produce a geodatabase for each layer.
i. GSQ_basement_wells
ii. GSQ_top_joe_joe_group
iii. GSQ_top_bandanna_merge
iv. GSQ_rewan_group
v. GSQ_clematis
vi. GSQ_moolyember
g. Additional wells and reinterpreted tops added to appropriate geodatabase based on well completion reports
h. Additional wells added to coverages to help model building process
i. Well_name listed as Fake
ii. Exception being GSQ_top_basement_fake which was created as a separate geodatabase
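As referenced in step d.ii, here is a minimal pandas sketch of the KB fallback and the AHD conversions in steps d.iii-d.iv; the dem_elevation column stands in for the 1S DEM sample (which was actually obtained in ArcGIS), and all column names are assumptions:

```python
# Hypothetical column layout for GSQ_Geology.xlsx.
import pandas as pd

wells = pd.read_excel("GSQ_Geology.xlsx")

# d.ii: where no KB is recorded (KB == 0), fall back to the DEM height
# plus 10 m and flag the well as lower reliability.
no_kb = wells["KB"] == 0
wells.loc[no_kb, "KB"] = wells.loc[no_kb, "dem_elevation"] + 10
wells["lower_reliability"] = no_kb

# d.iii-d.iv: convert depths below KB to elevations relative to AHD.
wells["Top_m_AHD"] = wells["KB"] - wells["Top_m_KB"]
wells["Bottom_m_AHD"] = wells["KB"] - wells["Bottom_m_KB"]
```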
Bore data
a. Obtained QLD_DNRM_GroundwaterDatabaseExtract_20131111 from GA
b. Used files REGISTRATIONS.txt, ELEVATIONS.txt and AQUIFER.txt to build GW_stratigraphy.xlsx
i. Based on RN
ii. Latitude from GIS_LAT (REGISTRATIONS.txt)
iii. Longitude from GIS_LNG (REGISTRATIONS.txt)
iv. Elevation from (ELEVATIONS.txt)
v. FORM_DESC from (AQUIFER.txt)
vi. Top from (AQUIFER.txt)
vii. Bottom from (AQUIFER.txt)
c. Brought GW_stratigraphy.xlsx into ArcGIS
d. Created gw_bores_galilee_dem
i. Sampled 1S DEM to obtain ground level elevation column RASTERVALU
ii. Created column top_m_AHD by RASTERVALU - Top
e. Selected bores based on "FORM_DESC" field for each model layer to produce a geodatabase for each layer.
i. Gw_basement
ii. GW_bores_joe_joe_group
iii. GW_bores_bandanna
iv. Gw_bores_rewan
v. Gw_bores_clematis
vi. Gw_bores_moolyember
Georectified seismic surfaces
a. Extracted interpreted seismic surfaces for base Permian (interpreted as basement) and top Bandanna (in time) from the following seismic surveys
i. Y80A, W81A, Carmichael, Pendine, T81A, Quilpie, Ward and Powell Creek seismic surveys, downloaded from https://qdexguest.deedi.qld.gov.au/portal/site/qdex/search?searchType=general
ii. Brought TIF images into ArcGIS and georectified
iii. Digitised shape of contours and faults into geodatabase
1. Basement_contours and basement_faults
2. bandanna_contours_new_data and bandanna_faults
iv. Added field "contour" to geodatabase
v. Converted contours to depth in "contour" field based on well and bore data (top_m_AHD) and contour progression
vi. Use the shape and depth derived from OZ SEEBASE to help to add additional contours and faults to basement and bandanna datasets
Additional contour and fault surfaces were built derived from underlying surfaces and wells/bore data
a. Joejoe_contours and joejoe_faults
b. Rewan_contour_clip (used bandanna_faults as fault coverage)
c. Clematis_contour and clematis_faults
d. Moolyember_contour (used clematis_faults as fault coverage)
Surface geology
a. Extracted surface geology from QUEENSLAND GEOLOGY_AUGUST_2012 using Galilee BA region boundary with 25 kilometre boundary to form geodatabase QLD_geology_galilee
b. Selected relevant surface geology from QLD_geology_galilee based on field "Name" for each model layer and created new geodatabase layers
i. Basement_geology: Argentine Metamorphics,Running River Metamorphics,Charters Towers Metamorphics; Bimurra Volcanics, Foyle Volcanics, Mount Wyatt Formation, Saint Anns Formation, Silver Hills Volcanics, Stones Creek Volcanics; Bulliwallah Formation, Ducabrook Formation, Mount Rankin Formation, Natal Formation, Star of Hope Formation; Cape River Metamorphics; Einasleigh Metamorphics; Gem Park Granite; Macrossan Province Cambrian-Ordovician intrusives; Macrossan Province Ordovician-Silurian intrusives; Macrossan Province Ordovician intrusives; Mount Formartine, unnamed plutonic units; Pama Province Silurian-Devonian intrusives; Seventy Mile Range Group; and Kirk River beds, Les Jumelles beds.
ii. Joe_joe_geology: Joe Joe Group
iii. Galilee_permian_geology: Back Creek Group, Betts Creek Group, Blackwater Group
iv. Rewan_geology: Rewan Group
1. Later also made dunda_beds_geology to be included in Rewan model: Dunda beds
v. Clematis_geology: Clematis Group
1. Later also made warang_sandstone_geology to be included in Clematis model: Warang Sandstone
vi. Moolyember_surface_geology: Moolyember Formation
DEM for each model layer
a. Using surface geology geodatabase extent extract grid from dem_s_1s to represent the top of the model layer at the surface
i. Basement_dem
ii. Joejoe_dem
iii. Bandanna_dem
iv. Rewan_dem and dunda_dem
v. Clematis_dem and warang_dem
vi. Moolyember_surface_dem
b. Used Contour tool in ArcGIS to obtain a 25 metre contour geodatabase from the relevant model DEM
i. Basement_dem_contours
ii. Joejoe_dem_contours
iii. Bandanna_dem_contours
iv. Rewan_dem_contours and dunda_dem_contours
v. Clematis_dem_contours and warang_dem_contours
vi. Moolyember_dem_contours
c. For the purpose of guiding the model building process additional fields were added to each DEM contour geodatabase was added based on average thickness derived from groundwater bores and petroleum wells.
i. Basement_dem_contours: Joejoe, bandanna, rewan, clematis, moolyember
ii. Joejoe_dem_contours: basement, bandanna
iii. Bandanna_dem_contours: joejoe, rewan
iv. Rewan_dem_contours and dunda_dem_contours: clematis, rewan
v. Clematis_dem_contours and warang_dem_contours: moolyember, rewan
vi. Moolyember_dem_contours: clematis
The model building process is as follows:
Used the Topo to Raster tool to create each surface based on the following rules (a minimal arcpy sketch follows the input list):
a. Environment
i. Extent
1. Top: -19.7012030024424
2. Right: 148.891511819054
3. Bottom: -27.5812030024424
4. Left: 139.141511819054
ii. Output cell size: 0.01 degrees
iii. Drainage enforcement: No_enforce
b. Input
i. Basement
1. Basement_dem_contour; field - contour; type - contour
2. Joejoe_dem_contour; field - basement; type - contour
3. Basement_contour; field - contour; type - contour
4. GSQ_basement_wells; field - top_m_AHD; type - point elevation
5. GW_basement; field - top_m_AHD; type - point elevation
6. GSQ_top_basement_fake; field - top_m_AHD; type - point elevation
7. Basement_faults; type - cliff
ii. Joe Joe Group
1. Joejoe_dem_contour; field - basement; type - contour
2. Basement_dem_contour; field - joejoe; type - contour
3. permian_dem_contour; field - joejoe, type - contour
4. joejoe_contour; field - joejoe; type - contour
5. GSQ_top_joejoe_group; field - top_m_AHD; type - point elevation
6. GW_bores_joe_joe_group; field - top_m_AHD; type - point elevation
7. joejoe_faults; type - cliff
iii. Bandanna Group
1. Permian_dem_contour; field - contour; type - contour
2. Joejoe_dem_contour; field - bandanna; type - contour
3. Rewan_dem_contour: field - bandanna; type - contour
4. Dunda_dem_contour; field - bandanna; type - contour
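As referenced above, here is a minimal arcpy sketch of the basement run under the stated environment settings. The dataset and field names follow the text; the class usage, keyword arguments, and output name are assumptions based on the arcpy Spatial Analyst interface, not the Programme's actual tooling:

```python
# Sketch of the basement Topo to Raster interpolation (inputs b.i.1-7).
# Requires the Spatial Analyst extension.
import arcpy
from arcpy.sa import TopoToRaster, TopoContour, TopoPointElevation, TopoCliff

arcpy.CheckOutExtension("Spatial")

inputs = [
    TopoContour([["Basement_dem_contour", "contour"],
                 ["Joejoe_dem_contour", "basement"],
                 ["Basement_contour", "contour"]]),
    TopoPointElevation([["GSQ_basement_wells", "top_m_AHD"],
                        ["GW_basement", "top_m_AHD"],
                        ["GSQ_top_basement_fake", "top_m_AHD"]]),
    TopoCliff(["Basement_faults"]),
]

# Extent(XMin, YMin, XMax, YMax) from the environment settings above;
# cell size 0.01 degrees; drainage enforcement off.
extent = arcpy.Extent(139.141511819054, -27.5812030024424,
                      148.891511819054, -19.7012030024424)

basement = TopoToRaster(inputs, 0.01, extent, enforce="NO_ENFORCE")
basement.save("basement_surface")
```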