Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example of how I use MS Excel's VLOOKUP() function to filter my data.
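The workbook itself is not attached to this entry, so as a purely illustrative stand-in, here is a minimal pandas sketch of the same lookup-then-filter idea (the column names and values are hypothetical); the Excel formula it mirrors is shown in a comment:

```python
import pandas as pd

# Hypothetical stand-ins for two worksheet ranges; the real workbook
# is not included with this entry.
orders = pd.DataFrame({"OrderID": [1, 2, 3], "ProductID": ["A", "B", "C"]})
products = pd.DataFrame({"ProductID": ["A", "B"], "Category": ["Food", "Toys"]})

# Excel equivalent: =VLOOKUP(B2, Products!A:B, 2, FALSE)
merged = orders.merge(products, on="ProductID", how="left")

# Keep only the rows where the lookup found a match -- the same effect as
# filtering out #N/A results from a VLOOKUP column.
filtered = merged.dropna(subset=["Category"])
print(filtered)
```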
Analyzing sales data is essential for any business looking to make informed decisions and optimize its operations. In this project, we will utilize Microsoft Excel and Power Query to conduct a comprehensive analysis of Superstore sales data. Our primary objectives will be to establish meaningful connections between various data sheets, ensure data quality, and calculate critical metrics such as the Cost of Goods Sold (COGS) and discount values. Below are the key steps and elements of this analysis:
1- Data Import and Transformation:
2- Data Quality Assessment:
3- Calculating COGS (see the sketch after this list):
4- Discount Analysis:
5- Sales Metrics:
6- Visualization:
7- Report Generation:
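The project workbook is not attached here; the following pandas sketch only illustrates steps 3 and 4 under assumed column names (Sales, Quantity, UnitCost, DiscountRate), not the project's actual formulas:

```python
import pandas as pd

# Hypothetical Superstore rows; the real data lives in the project workbook.
df = pd.DataFrame({
    "Sales":        [261.96, 731.94, 14.62],
    "Quantity":     [2, 3, 2],
    "UnitCost":     [98.00, 200.00, 5.00],   # assumed cost column
    "DiscountRate": [0.00, 0.10, 0.20],
})

# Step 3: COGS = units sold x unit cost.
df["COGS"] = df["Quantity"] * df["UnitCost"]

# Step 4: discount value = recorded sales x discount rate
# (assumes Sales is recorded before the discount; adjust if it is net).
df["DiscountValue"] = df["Sales"] * df["DiscountRate"]

# A basic step-5 metric built on the two results above.
df["Profit"] = df["Sales"] - df["DiscountValue"] - df["COGS"]
print(df[["Sales", "COGS", "DiscountValue", "Profit"]])
```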
Throughout this analysis, the goal is to provide a clear and comprehensive understanding of the Superstore's sales performance. By using Excel and Power Query, we can efficiently manage and analyze the data, ensuring that the insights gained contribute to the store's growth and success.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The folder contains two subfolders: one with EyeLink 1000 Plus eye-tracking data, and the other with Tobii Nano Pro data. Each of these folders includes a file named "Gaze_position_raw", which is an Excel file containing all the raw data collected from participants. In this file, different trials are stored in separate sheets, and each sheet contains columns for the displayed target location and the corresponding gaze location. A separate folder called "Processed_data_EyeLink/Tobii" contains the results after performing a linear transformation. In this folder, each trial is saved as a separate Excel file. These files include the target location, the original gaze location, and the corrected gaze location after the corresponding linear transformation.
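The dataset does not specify the exact form of the linear transformation; one common choice consistent with the description is a least-squares affine fit from raw gaze positions to target positions. The numpy sketch below shows that idea with made-up coordinates:

```python
import numpy as np

# Hypothetical arrays; in the dataset these come from the per-trial sheets
# (displayed target location vs. recorded raw gaze location).
targets = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
gaze_raw = np.array([[0.5, 0.3], [10.6, 0.2], [0.4, 10.5], [10.7, 10.4]])

# Augment gaze with a constant column and solve X @ A ~= targets in the
# least-squares sense: a 2-D affine map (scale/shear plus offset).
X = np.hstack([gaze_raw, np.ones((len(gaze_raw), 1))])
A, *_ = np.linalg.lstsq(X, targets, rcond=None)

gaze_corrected = X @ A  # the "corrected gaze location" analog
print(np.round(gaze_corrected, 2))
```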
https://digital.nhs.uk/about-nhs-digital/terms-and-conditions
Contains monthly data from the Assuring Transformation dataset. Data is available in Excel or CSV format.
The documentation below refers to this item's placement in the NM Supply Chain Data Hub. It is useful for understanding the source of this item and how to reproduce it for updates.
Title: NM Food Retailers, 2022 - Microsoft Excel Version
Item Type: Microsoft Excel
Summary: Food Retailers by type (mobile, restaurant, etc.), as a Microsoft Excel file
Notes:
Prepared by: Link uploaded by EMcRae_NMCDC
Source: NM Environment Dept. - sent directly
Feature Service: https://nmcdc.maps.arcgis.com/home/item.html?id=fdf6b9eeb01d4cd8bbc32d5b7da16f62
UID: 7, 8, 38, 70
Data Requested: Food trucks, local cottage industry (commercial kitchens, etc.), food retailers, grocery stores - location, size, type
Method of Acquisition: Contact made with NM Environment Dept.
Date Acquired: May of 2022
Priority rank as identified in 2022 (scale of 1 being the highest priority to 11 being the lowest priority): 9, 7, 11, 6
Tags: PENDING_
Title: New Mexico Food Retailers 2022 - NMFoodRetailers2022
Summary List of licensed food retailers with categories as of April 2022
Notes
Source New Mexico Environment Department
Prepared by EMcRae_NMCDC
Feature Service https://nmcdc.maps.arcgis.com/home/item.html?id=69d62107fa3d49a18acb87a8a584ca03
Alias Definition
Name Name
License License Number
Status Status
Street1 Street 1
Street2 Street 2
City City
State State
Zip Zip
Retail Food Establishment (Retail)
Mobile Mobile Food Establishment
MobType MobileType
MobSup Mobile Support Unit
ServArea Servicing Area (Commissary)
FullServ Full Service Restaurant
Restrnt Restaurant
Deli Deli
Seafood Seafood Market
Meat Meat Market
ConvStore Convenience Store
Daycare Day Care
SchFood School Food Program
Bar Bar
Coffee Coffee Shop
Catering Catering Operation
Concess Concession Stand/Snack Bar
Snack Institution
Bakery Bakery
Grocery Market (Grocery)
Other Other
Lat Latitude
Long Longitude
AccScore Accuracy Score
AccType Accuracy Type
Number Number
Street Street
UnitType Unit Type
UnitNum Unit Number
GCCity City
GCState State
GCCounty County
GCZip Zip
GCCountry Country
GCSource Source
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This project focuses on data mapping, integration, and analysis to support the development and enhancement of six UNCDF operational applications: OrgTraveler, Comms Central, Internal Support Hub, Partnership 360, SmartHR, and TimeTrack. These apps streamline workflows for travel claims, internal support, partnership management, and time tracking within UNCDF.
Key Features and Tools:
Data Mapping for Salesforce CRM Migration: Structured and mapped data flows to ensure compatibility and seamless migration to Salesforce CRM.
Python for Data Cleaning and Transformation: Utilized pandas, numpy, and APIs to clean, preprocess, and transform raw datasets into standardized formats.
Power BI Dashboards: Designed interactive dashboards to visualize workflows and monitor performance metrics for decision-making.
Collaboration Across Platforms: Integrated Google Colab for code collaboration and Microsoft Excel for data validation and analysis.
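The project's actual cleaning scripts are not part of this entry; as a hedged illustration of the pandas standardization step described above, with entirely hypothetical field names:

```python
import pandas as pd

# Hypothetical raw export from one of the apps (e.g., TimeTrack);
# the real field names are defined in the Salesforce mapping documents.
raw = pd.DataFrame({
    "staff_id": [" 101", "102", "102", None],
    "hours":    ["7.5", "8", "8", "6"],
    "date":     ["2024-01-02", "2024-01-03", "2024-01-03", "2024-01-04"],
})

clean = (
    raw.dropna(subset=["staff_id"])      # drop rows with no usable key
       .assign(
           staff_id=lambda d: d["staff_id"].str.strip(),
           hours=lambda d: pd.to_numeric(d["hours"], errors="coerce"),
           date=lambda d: pd.to_datetime(d["date"]),
       )
       .drop_duplicates()                # remove repeated records
)
print(clean)
```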
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains four MATLAB scripts designed for the processing, analysis, and visualisation of acceleration data obtained from Smart Rock sensors. These scripts facilitate importing raw data from Excel files, processing it to extract meaningful insights such as frequency spectra, signal peaks, and orientation information. Below is a brief overview of each script:
Retreive_raw_data.m: The main script responsible for importing raw acceleration and quaternion data from user-selected Excel files. It initiates the data processing pipeline by calling functions to import, visualise, and analyse the data. The script plots the acceleration data along the X, Y, and Z axes and manages quaternion data for further processing, such as conversion to rotation matrices.
importfile.m: A supporting function specifically designed to import acceleration data from the specified Excel worksheet. It extracts time series data along with acceleration values on three axes (X, Y, Z) and prepares the data for visualisation and analysis in the main script.
frequenzspektrum.m: This function calculates the frequency spectrum of a given signal using the Fast Fourier Transform (FFT). It returns the amplitude and phase spectra, enabling frequency-domain analysis of acceleration signals. This script is often called during the analysis phase for detailed signal processing (a numpy sketch of the equivalent computation follows this list of scripts).
Composite acceleration and signal smooth.m: This script processes the imported acceleration data by resampling it to equal time intervals, applying low-pass and high-pass filters, detecting peaks in the signal, and performing Fourier Transform to analyse the frequency spectrum. It provides a more detailed analysis of the composite acceleration derived from the X, Y, and Z components.
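The scripts in the dataset are MATLAB; purely for illustration, here is a numpy sketch of the amplitude/phase computation that frequenzspektrum.m is described as performing (a single-sided spectrum is assumed, which may differ from the original):

```python
import numpy as np

def frequency_spectrum(signal, fs):
    """Return frequencies, amplitude spectrum and phase spectrum via FFT."""
    n = len(signal)
    spec = np.fft.rfft(signal)                 # one-sided FFT of a real signal
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amplitude = np.abs(spec) / n               # normalise by record length
    amplitude[1:] *= 2                         # fold in negative frequencies
    phase = np.angle(spec)
    return freqs, amplitude, phase

# Sanity check: a 5 Hz sine sampled at 100 Hz should peak at 5 Hz.
fs = 100.0
t = np.arange(0, 1, 1 / fs)
f, amp, _ = frequency_spectrum(np.sin(2 * np.pi * 5 * t), fs)
print(f[np.argmax(amp)])  # -> 5.0
```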
https://creativecommons.org/publicdomain/zero/1.0/
Vrinda Store: Interactive MS Excel dashboard (Feb 2024 - Mar 2024). The owner of Vrinda Store wants to create an annual sales report for 2022 so that employees can understand their customers and grow sales further. The questions asked by the owner are as follows: 1) Compare the sales and orders using a single chart. 2) Which month got the highest sales and orders? 3) Who purchased more in 2022 - women or men? 4) What were the different order statuses in 2022?
Along with some other business-related questions, the owner wanted a visual story of the data that could depict the real-time progress and sales insights of the store. This project is an MS Excel dashboard that presents an interactive visual story to help the owner and employees increase sales. Tasks performed: data cleaning, data processing, data analysis, data visualization, reporting. Tool used: MS Excel. Skills: Data Analysis · Data Analytics · MS Excel · Pivot Tables
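The workbook is not attached to this entry; as a small hedged sketch, question 2 (highest sales and orders by month) maps onto a pivot-style aggregation like the following, with hypothetical Date/Amount columns standing in for the 2022 data:

```python
import pandas as pd

# Hypothetical order rows standing in for the 2022 workbook.
orders = pd.DataFrame({
    "Date":   pd.to_datetime(["2022-01-15", "2022-03-02", "2022-03-20"]),
    "Amount": [500, 800, 700],
})

# The pandas analog of an Excel pivot table: sales and order counts per month.
monthly = (
    orders.assign(Month=orders["Date"].dt.month_name())
          .groupby("Month", sort=False)
          .agg(sales=("Amount", "sum"), orders=("Amount", "count"))
)
print(monthly.sort_values("sales", ascending=False).head(1))  # top month
```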
This is a full set of experimental data derived from biodegradation experiments that were conducted with COREXIT 9500 dispersant in seawater, with the objectives of defining the rates and transformation products of degradation for each set of surfactants in the commercial COREXIT 9500 series. In addition, the data set contains detailed mass spectral characterization results for the surfactants in COREXIT 9500 dispersants. Data files include raw mass spectral files (in mzML format), peak lists processed through mass spectral data analysis software (in Excel format), custom data processing routines (written in the R language), and figures generated from the raw data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains the following files and datasets:
Flow Cytometry Data
- Individual FCS files - Raw data files obtained following segmentation
- Analysis file (pre-transformation) - Data analysis file before transformation, compatible with FCS Express
- Analysis file (post-transformation) - Data analysis file after transformation, compatible with FCS Express
- DNS format files - Processed files analyzed following data transformation
Statistical Analysis and Figures
- Manuscript figures - All figures from the manuscript in GraphPad Prism format, accessible with Numbers, including statistical test results
Data Extraction and Spatial Analysis
- Cluster percentages - Excel file containing individual cluster percentages extracted from the analysis file
- Spatial neighborhood data - Excel file with all data used as the starting point for spatial neighborhood map generation
- Spatial interaction maps - ZIP archive containing heatmaps showing spatial interactions between individual clusters
Please see the collection for related records: https://doi.org/10.25405/data.ncl.c.7890872
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ziemann’s supplementary file. Tab-separated, plain text version of the Ziemann et al. [2] supplementary file. (TSV 148 kb)
https://dataintelo.com/privacy-and-policy
The global big data technology market size was valued at approximately $162 billion in 2023 and is projected to reach around $471 billion by 2032, growing at a Compound Annual Growth Rate (CAGR) of 12.6% during the forecast period. The growth of this market is primarily driven by the increasing demand for data analytics and insights to enhance business operations, coupled with advancements in AI and machine learning technologies.
One of the principal growth factors of the big data technology market is the rapid digital transformation across various industries. Businesses are increasingly recognizing the value of data-driven decision-making processes, leading to the widespread adoption of big data analytics. Additionally, the proliferation of smart devices and the Internet of Things (IoT) has led to an exponential increase in data generation, necessitating robust big data solutions to analyze and extract meaningful insights. Organizations are leveraging big data to streamline operations, improve customer engagement, and gain a competitive edge.
Another significant growth driver is the advent of advanced technologies like artificial intelligence (AI) and machine learning (ML). These technologies are being integrated into big data platforms to enhance predictive analytics and real-time decision-making capabilities. AI and ML algorithms excel at identifying patterns within large datasets, which can be invaluable for predictive maintenance in manufacturing, fraud detection in banking, and personalized marketing in retail. The combination of big data with AI and ML is enabling organizations to unlock new revenue streams, optimize resource utilization, and improve operational efficiency.
Moreover, regulatory requirements and data privacy concerns are pushing organizations to adopt big data technologies. Governments worldwide are implementing stringent data protection regulations, like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States. These regulations necessitate robust data management and analytics solutions to ensure compliance and avoid hefty fines. As a result, organizations are investing heavily in big data platforms that offer secure and compliant data handling capabilities.
As organizations continue to navigate the complexities of data management, the role of Big Data Professional Services becomes increasingly critical. These services offer specialized expertise in implementing and managing big data solutions, ensuring that businesses can effectively harness the power of their data. Professional services encompass a range of offerings, including consulting, system integration, and managed services, tailored to meet the unique needs of each organization. By leveraging the knowledge and experience of big data professionals, companies can optimize their data strategies, streamline operations, and achieve their business objectives more efficiently. The demand for these services is driven by the growing complexity of big data ecosystems and the need for seamless integration with existing IT infrastructure.
Regionally, North America holds a dominant position in the big data technology market, primarily due to the early adoption of advanced technologies and the presence of key market players. The Asia Pacific region is expected to witness the highest growth rate during the forecast period, driven by increasing digitalization, the rapid growth of industries such as e-commerce and telecommunications, and supportive government initiatives aimed at fostering technological innovation.
The big data technology market is segmented into software, hardware, and services. The software segment encompasses data management software, analytics software, and data visualization tools, among others. This segment is expected to witness substantial growth due to the increasing demand for data analytics solutions that can handle vast amounts of data. Advanced analytics software, in particular, is gaining traction as organizations seek to gain deeper insights and make data-driven decisions. Companies are increasingly adopting sophisticated data visualization tools to present complex data in an easily understandable format, thereby enhancing decision-making processes.
Hurricane Sandy, the largest storm of historical record in the Atlantic basin, severely impacted southern Long Island, New York in October 2012. In 2014, the U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers (USACE), conducted a high-resolution multibeam echosounder survey with Alpine Ocean Seismic Survey, Inc., offshore of Fire Island and western Long Island, New York to document the post-storm conditions of the inner continental shelf. The objectives of the survey were to determine the impact of Hurricane Sandy on the inner continental shelf morphology and modern sediment distribution, and provide additional geospatial data for sediment transport studies and coastal change model development. For more information about the WHCMSC Field Activity, see https://cmgds.marine.usgs.gov/fan_info.php?fan=2014-072-FA.
This workflow aims to efficiently integrate floral sample data from Excel files into a MongoDB database for botanical projects. It involves verifying and updating taxonomic information, importing georeferenced floral samples, converting data to JSON format, and uploading it to the database. This process ensures accurate taxonomy and enriches the database with comprehensive sample information, supporting robust data analysis and enhancing the project's overall dataset.
Background
Efficient management of flora sample data is essential in botanical projects, especially when integrating diverse information into a MongoDB database. This workflow addresses the challenge of incorporating floral samples, collected at various sampling points, into the MongoDB database. The database is divided into two segments: one storing taxonomic information and common characteristics of taxa, and the other containing georeferenced floral samples with relevant information. The workflow ensures that, upon importing new samples, taxonomic information is verified and updated, if necessary, before storing the sample data.
Introduction
In botanical projects, effective data handling is pivotal, particularly when incorporating diverse flora samples into a MongoDB database. This workflow focuses on importing floral samples from an Excel file into MongoDB, ensuring data integrity and taxonomic accuracy. The database is structured into taxonomic information and a collection of georeferenced floral samples, each with essential details about the collection location and the species' nativity. The workflow dynamically updates taxonomic records and stores new samples in the appropriate database sections, enriching the overall floral sample collection.
Aims
The primary aim of this workflow is to streamline the integration of floral sample data into the MongoDB database, maintaining taxonomic accuracy and enhancing the overall collection. The workflow includes the following key components:
- Taxonomy Verification and Update: Checks and updates taxonomic information in the MongoDB database, ensuring accuracy before importing new floral samples.
- Georeferenced Sample Import: Imports floral samples from the Excel file, containing georeferenced information and additional sample details.
- JSON Transformation and Database Upload: Transforms the floral sample information from the Excel file into JSON format and uploads it to the appropriate sections of the MongoDB database.
Scientific Questions
- Taxonomy Verification Process: How effectively does the workflow verify and update taxonomic information before importing new floral samples?
- Georeferenced Sample Storage: How does the workflow handle the storage of georeferenced floral samples, considering collection location and species nativity?
- JSON Transformation Accuracy: How successful is the transformation of floral sample information from the Excel file into JSON format for MongoDB integration?
- Database Enrichment: How does the workflow contribute to enriching the taxonomic and sample collections in the MongoDB database, and how is this reflected in the overall project dataset?
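The workflow's own scripts are not reproduced in this entry; the condensed pandas/pymongo sketch below illustrates the verify-taxonomy-then-store-sample pattern it describes, with hypothetical file, collection, and column names throughout:

```python
import pandas as pd
from pymongo import MongoClient

# Hypothetical connection, database and column names; the real workflow
# defines its own schema for the two database segments.
client = MongoClient("mongodb://localhost:27017")
db = client["flora"]

samples = pd.read_excel("floral_samples.xlsx")  # assumed: taxon, lat, lon, native

for record in samples.to_dict(orient="records"):  # rows as JSON-like dicts
    # 1) Taxonomy verification/update: upsert one document per taxon.
    db.taxa.update_one(
        {"taxon": record["taxon"]},
        {"$set": {"taxon": record["taxon"], "native": record["native"]}},
        upsert=True,
    )
    # 2) Store the georeferenced sample in the samples segment.
    db.samples.insert_one({
        "taxon": record["taxon"],
        "location": {"type": "Point",
                     "coordinates": [record["lon"], record["lat"]]},
        "native": record["native"],
    })
```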
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is a collection of historic data on wind turbine installations in the whole of Denmark from the Danish Energy Agency (Energistyrelsen), used in a flexibility study by Karen Olsen in 2018 (the paper has been submitted to a PowerTech conference). The data has been extracted from the website of Energistyrelsen at the following link, where historic data is publicly available (called "Stamdataregister"): http://ens.dk/service/statistik-data-noegletal-og-kort/data-oversigt-over-energisektoren
The present version was extracted in January 2019 and contains installation and production data until and including December 2018. The data is in the originally downloaded Excel file, ready to be parsed by the Python code written by Karen Olsen.
Data used for analysis:
- turbine ID number (column "Turbine identifier (GSRN)" in the Excel spreadsheet)
- date of installation (column "Date of original connection to grid" in the Excel spreadsheet)
- turbine capacity (column "Capacity (kW)" in the Excel spreadsheet)
- turbine location commune (column "Local authority no" in the Excel spreadsheet)
- turbine placing sea/land (column "Type of location" in the Excel spreadsheet)
- yearly production (columns starting at "Historic production figures (kWh):" in the Excel spreadsheet)
Further information and code for analysis can be found under: http://kpolsen.github.io/windpype_dev/
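Karen Olsen's actual parser lives at the linked windpype repository; as a minimal hedged sketch of reading the columns listed above with pandas (the local file name is assumed):

```python
import pandas as pd

# File name is assumed; use the Excel file downloaded from the register.
df = pd.read_excel("wind_turbines_dk.xlsx")

cols = {
    "Turbine identifier (GSRN)":           "turbine_id",
    "Date of original connection to grid": "installed",
    "Capacity (kW)":                       "capacity_kw",
    "Local authority no":                  "commune",
    "Type of location":                    "placing",  # sea / land
}
df = df[list(cols)].rename(columns=cols)
df["installed"] = pd.to_datetime(df["installed"], errors="coerce")

# Example use: installed capacity per year, split by sea/land placing.
per_year = df.groupby([df["installed"].dt.year, "placing"])["capacity_kw"].sum()
print(per_year.head())
```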
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Excel files contain measurements from a dedicated swimming pool test facility for the study of a selective transmission cover. The Excel files contain the data for the complete period between 15/6/28 and 3/07/18. The metadata is provided in the first worksheet. The Origin file contains the details of a chemical trial for a comparison of a transparent and selective transmission cover. Details of the study can be found in the notes included with the Origin project file.
https://www.verifiedmarketresearch.com/privacy-policy/
NoSQL Database Market size was valued at USD 7.43 Billion in 2024 and is projected to reach USD 60 Billion by 2031, growing at a CAGR of 30% during the forecast period from 2024 to 2031.
Global NoSQL Database Market Drivers
Big Data Management: The exponential growth of unstructured and semi-structured data necessitates flexible and scalable database solutions.
Cloud Computing Adoption: The shift towards cloud-based applications and infrastructure is driving demand for NoSQL databases.
Real-time Analytics: NoSQL databases excel at handling real-time data processing and analytics, making them suitable for applications like IoT and fraud detection.
Global NoSQL Database Market Restraints
Complexity and Management Challenges: NoSQL databases can be complex to manage and require specialized skills.
Lack of Standardization: The absence of a standardized NoSQL query language can hinder data integration and migration.
The main objectives of the survey were:
- To obtain weights for the revision of the Consumer Price Index (CPI) for Funafuti;
- To provide information on the nature and distribution of household income, expenditure and food consumption patterns;
- To provide data on the household sector's contribution to the National Accounts;
- To provide information on the economic activity of men and women to study gender issues;
- To undertake some poverty analysis.
National, including Funafuti and Outer islands
All private households are included in the sampling frame. In each selected household, the current residents are surveyed, including people who are usual residents but are currently away (for work, health, or holiday reasons, or boarding students, for example). If the household has been residing in Tuvalu for less than one year:
- but intends to reside more than 12 months => the household is included
- does not intend to reside more than 12 months => out of scope
Sample survey data [ssd]
It was decided that a 33% (one-third) sample was sufficient to achieve suitable levels of accuracy for key estimates in the survey, so the sample selection was spread proportionally across all the islands except Niulakita, which was considered too small. For selection purposes, each island was treated as a separate stratum and independent samples were selected from each. The strategy used was to list each dwelling on the island by its geographical position and run a systematic skip through the list to achieve the 33% sample. This approach ensured that the sample would be spread across each island as much as possible and thus be more representative.
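As an illustration of the selection strategy (not the survey office's actual code), a systematic one-in-three skip over a geographically ordered dwelling list looks like this:

```python
import random

def systematic_sample(dwellings, interval=3):
    """Select every `interval`-th dwelling after a random start.

    `dwellings` is assumed to be ordered by geographical position, which is
    what spreads the sample across the island as described above.
    """
    start = random.randrange(interval)
    return dwellings[start::interval]

island = [f"dwelling_{i}" for i in range(1, 31)]  # toy stratum of 30 dwellings
print(systematic_sample(island))                  # ~33% sample, spread out
```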
For details please refer to Table 1.1 of the Report.
Only the island of Niulakita was not included in the sampling frame, as it was considered too small.
Face-to-face [f2f]
There were three main survey forms used to collect data for the survey. Each question was written in English and translated into Tuvaluan on the same version of the questionnaire. The questionnaires were designed based on the 2004 survey questionnaire.
HOUSEHOLD FORM
- composition of the household and demographic profile of each member
- dwelling information
- dwelling expenditure
- transport expenditure
- education expenditure
- health expenditure
- land and property expenditure
- household furnishings
- home appliances
- cultural and social payments
- holidays/travel costs
- loans and savings
- clothing
- other major expenditure items
INDIVIDUAL FORM
- health and education
- labour force (individuals aged 15 and above)
- employment activity and income (individuals aged 15 and above): wages and salaries, working own business, agriculture and livestock, fishing, income from handicrafts, income from gambling, small-scale activities, jobs in the last 12 months, other income, children's income, tobacco and alcohol use, other activities, and seafarer income
DIARY (one diary per week over a 2-week period; 2 diaries per household were required)
- all kinds of expenses
- home production
- food and drink (eaten by the household, given away, sold)
- goods taken from own business (consumed, given away)
- monetary gifts (given away, received, winnings from gambling)
- non-monetary gifts (given away, received, winnings from gambling)
Questionnaire Design Flaws
Questionnaire design flaws are problems with the way questions were worded that result in incorrect answers from respondents. Despite every effort to minimize this problem during the design of the respective survey questionnaires and the diaries, problems were still identified during the analysis of the data. Some examples are provided below:
Gifts, Remittances & Donations
Collecting information on the following:
- the receipt and provision of gifts
- the receipt and provision of remittances
- the provision of donations to the church, other communities and family occasions
is a very difficult task in a HIES. The extent of these activities in Tuvalu is very high, so every effort should be made to address these activities as best as possible. A key problem lies in identifying the best form (questionnaire or diary) for covering such activities. A general rule of thumb for a HIES is that if the activity occurs on a regular basis and involves the exchange of small monetary amounts or in-kind gifts, the diary is more appropriate. On the other hand, if the activity is less frequent and involves larger sums of money, the questionnaire with a recall approach is preferred. It is not always easy to distinguish between the two for the different activities, and as such, both the diary and questionnaire were used to collect this information. Unfortunately it probably wasn't made clear enough what types of transactions were being collected from the different sources, and as such some transactions might have been missed and others counted twice. The effects of these problems are hopefully minimal overall.
Defining Remittances
Because people have different interpretations of what constitutes remittances, the questionnaire needs to be very clear as to how this concept is defined in the survey. Unfortunately this wasn't explained clearly enough, so it was difficult to distinguish between a remittance, which should be of a more regular nature, and a one-off monetary gift transferred between two households.
Business Expenses Still Recorded
The aim of the survey is to measure "household" expenditure, so any expenditure made by a household for an item or service primarily used for a business activity should be excluded. It was not always clear in the questionnaire that this was the case, and as such some business expenses were included. Efforts were made during data cleaning to remove any such business expenses that would impact significantly on survey results.
Purchased goods given away as a gift
When a household gives away an item it has purchased, this is recorded in section 5 of the diary. Unfortunately it was difficult to know how to treat these items, as it was not clear whether they had already been recorded in section 1 of the diary, which covers purchases. The decision was made to exclude all information on gifts given which were considered to be purchases, as these items were assumed to have already been recorded in section 1. Ideally these items should be treated as purchased gifts given away, which in turn is not household consumption expenditure, but this was not possible.
Some key items missed in the Questionnaire
Although not a big issue, some key expenditure items were omitted from the questionnaire when it would have been best to collect them via this schedule. A key example is electric fans, which many households in Tuvalu own.
Consistency of the data:
- each questionnaire was checked by the supervisor during and after the collection
- before data entry, all questionnaires were coded
- the CSPro data entry system included inconsistency checks which allowed the NSO staff to spot some errors and correct them with imputation estimates based on their own knowledge (there was no time for double entry); four data entry operators were used
- after data entry, outliers were identified in order to check their consistency
All data entry, including editing, edit checks and queries, was done using CSPro (Census and Survey Processing System), with additional data editing and cleaning taking place in Excel.
The staff from the CSD was responsible for undertaking the coding and data entry, with assistance from an additional four temporary staff to help produce results in a more timely manner.
Although enumeration didn't finish until mid-June, the coding and data entry commenced as soon as forms were available from Funafuti, which was towards the end of March. The coding and data entry were then completed around the middle of July.
A visit from an SPC consultant then took place to undertake initial cleaning of the data, primarily addressing missing data items and missing schedules. Once the initial data cleaning was undertaken in CSPro, data was transferred to Excel where it was closely scrutinized to check that all responses were sensible. In the cases where unusual values were identified, original forms were consulted for these households and modifications made to the data if required.
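The actual screening was done interactively in CSPro and Excel; for illustration only, one conventional way to flag "unusual values" for manual review is an interquartile-range rule like this pandas sketch:

```python
import pandas as pd

def flag_outliers(series, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] for manual review."""
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return (series < q1 - k * iqr) | (series > q3 + k * iqr)

# Toy weekly food expenditure; the flagged household would have its
# original form consulted, as described above.
spend = pd.Series([42, 55, 48, 60, 51, 470], name="food_spend")
print(spend[flag_outliers(spend)])  # -> 470
```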
Despite the best efforts to clean the data file in preparation for the analysis, errors will no doubt still exist in the data, due to its size and complexity. Having said this, they are not expected to have significant impacts on the survey results.
Under-Reporting and Incorrect Reporting as a Result of Poor Fieldwork Procedures
The most crucial stage of any survey activity, whether a population census or a survey such as a HIES, is the fieldwork. It is crucial for intense checking to take place in the field before survey forms are returned to the office for data processing. Unfortunately, it became evident during the cleaning of the data that fieldwork wasn't checked as thoroughly as required, and as such some unexpected values appeared in the questionnaires, as well as unusual results in the diaries. Efforts were made to identify the main issues which would have the greatest impact on final results, and this information was modified, using local knowledge, to a more reasonable answer when required.
Data Entry Errors
Data entry errors are always expected, but can be kept to a minimum with thorough verification and editing of the entered data.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Excel sheet containing all the raw data: the raw data from Google Forms, the raw data after cleaning, after cleaning and coding, and after outliers were removed. It also includes the coding system used.