This dataset was created by Pinky Verma
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Sample data for exercises in Further Adventures in Data Cleaning.
http://researchdatafinder.qut.edu.au/display/n14350
This spreadsheet provides a platform for adsorption isotherm data to be plotted with accompanying statistical assessment. The method and interpretation of this spreadsheet will be disclosed in an... QUT Research Data Repository Dataset Resource available for download
http://www.gnu.org/licenses/lgpl-3.0.html
On the official website the dataset is available via SQL Server (localhost) and CSVs, to be used with Power BI Desktop running on the Virtual Lab (virtual machine). The first two steps of importing data were executed in the virtual lab, and the resulting Power BI tables were copied to CSVs. Records were added up to the year 2022 as required.
This dataset is helpful if you want to work offline with Adventure Works data in Power BI Desktop in order to follow the lab instructions in the training material on the official website. It is also useful if you want to work through the Power BI Desktop Sales Analysis example from the Microsoft PL-300 learning path.
Download the CSV file(s) and import them into Power BI Desktop as tables. The CSVs are named after the tables created by the first two steps of importing data, as described in the PL-300 Microsoft Power BI Data Analyst exam lab.
https://cdla.io/sharing-1-0/
The Superstore Sales Data dataset, available in Excel format as "Superstore.xlsx", is a comprehensive collection of sales and customer-related information from a retail superstore. This dataset comprises three distinct tables, each providing specific insights into the store's operations and customer interactions.
https://digital.nhs.uk/about-nhs-digital/terms-and-conditions
Warning: Large file size (over 1GB). Each monthly data set is large (over 4 million rows), but can be viewed in standard software such as Microsoft WordPad (save by right-clicking on the file name and selecting 'Save Target As', or equivalent on Mac OS X). It is then possible to select the required rows of data and copy and paste the information into another software application, such as a spreadsheet. Alternatively, add-ons to existing software that handle larger data sets, such as the Microsoft PowerPivot add-on for Excel, can be used. The Microsoft PowerPivot add-on for Excel is available from Microsoft: http://office.microsoft.com/en-gb/excel/download-power-pivot-HA101959985.aspx

Once PowerPivot has been installed, follow the instructions below to load the large files. Note that it may take at least 20 to 30 minutes to load one monthly file.

1. Start Excel as normal
2. Click on the PowerPivot tab
3. Click on the PowerPivot Window icon (top left)
4. In the PowerPivot Window, click on the "From Other Sources" icon
5. In the Table Import Wizard, scroll to the bottom and select Text File
6. Browse to the file you want to open and choose the file extension you require, e.g. CSV

Once the data has been imported you can view it in a spreadsheet.

What does the data cover? General practice prescribing data is a list of all medicines, dressings and appliances that are prescribed and dispensed each month. A record is only produced when this has occurred; there is no record for a zero total. For each practice in England, the following information is presented at presentation level for each medicine, dressing and appliance (by presentation name):

- the total number of items prescribed and dispensed
- the total net ingredient cost
- the total actual cost
- the total quantity

The data covers NHS prescriptions written in England and dispensed in the community in the UK. Prescriptions written in England but dispensed outside England are included.
The data includes prescriptions written by GPs and other non-medical prescribers (such as nurses and pharmacists) who are attached to GP practices. GP practices are identified only by their national code, so an additional data file - linked to the first by the practice code - provides further detail in relation to the practice. Presentations are identified only by their BNF code, so an additional data file - linked to the first by the BNF code - provides the chemical name for that presentation.
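As an alternative to PowerPivot, the monthly files can be processed in chunks with pandas so the full 4-million-row extract never has to sit in memory at once. This is a minimal sketch; the filename and the `PRACTICE` and `ITEMS` column names are assumptions for illustration and should be checked against the actual download.

```python
import pandas as pd

# Hypothetical filename for one monthly extract; adjust to the actual download.
FILENAME = "gp_prescribing_monthly.csv"

def total_items_by_practice(path: str) -> pd.Series:
    """Sum prescribed items per practice code, reading the file in
    500,000-row chunks so memory usage stays bounded."""
    totals: dict[str, int] = {}
    for chunk in pd.read_csv(path, chunksize=500_000):
        # Aggregate within the chunk, then fold into the running totals.
        for practice, items in chunk.groupby("PRACTICE")["ITEMS"].sum().items():
            totals[practice] = totals.get(practice, 0) + items
    return pd.Series(totals).sort_values(ascending=False)
```

The per-chunk aggregation keeps only one practice-level summary in memory at a time, which is the usual pattern for files too large for a spreadsheet.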
https://creativecommons.org/publicdomain/zero/1.0/
Data was imported from the BAK file found here into SQL Server, and then individual tables were exported as CSV. Jupyter Notebook containing the code used to clean the data can be found here
Version 6 has some more cleaning and structuring, applied after issues were noticed on importing into Power BI. Code was added to the Python notebook to export a newly cleaned dataset, for example adding a MonthNumber column for sorting by month, and similarly a WeekDayNumber column.
Cleaning was done in Python, with SQL Server also used to quickly inspect the data. Headers were added separately, ensuring no data loss. Data was cleaned for NaN and garbage values across columns.
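The kind of cleaning described above can be sketched in pandas. This is illustrative only: the `Date` column name is an assumption, and the actual notebook may differ.

```python
import pandas as pd

def add_sort_keys(df: pd.DataFrame, date_col: str = "Date") -> pd.DataFrame:
    """Add MonthNumber and WeekDayNumber columns so that month and weekday
    names can be sorted chronologically in Power BI ('sort by column')
    rather than alphabetically."""
    out = df.dropna(how="all").copy()          # drop fully-empty rows
    dates = pd.to_datetime(out[date_col])
    out["MonthNumber"] = dates.dt.month        # 1..12
    out["WeekDayNumber"] = dates.dt.dayofweek  # 0 = Monday .. 6 = Sunday
    return out
```

Exporting the result with `to_csv(..., index=False)` would then reproduce the cleaned CSVs mentioned above.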
General information

This item contains data sets for Schlegel et al., Nano Letters, 2023. DOI: https://doi.org/10.1021/acs.nanolett.2c04884 It contains confocal images, lattice light sheet images, flow cytometry data, compiled data as an Excel sheet, and raw figure files.

Abstract

Speed is key during infectious disease outbreaks. It is essential, for example, to identify critical host binding factors to pathogens as fast as possible. The complexity of the host plasma membrane is often a limiting factor hindering fast and accurate determination of host binding factors, as well as high-throughput screening for neutralizing antimicrobial drug targets. Here, we describe a multiparametric and high-throughput platform tackling this bottleneck and enabling fast screens for host binding factors as well as new antiviral drug targets. The sensitivity and robustness of our platform were validated by blocking SARS-CoV-2 particles with nanobodies and IgGs from human serum samples.

Data usage

Researchers are welcome to use the data contained in the dataset for any projects. Please cite this item upon use or when published. We encourage reuse under the same CC BY 4.0 License.

Data content

- Excel files for graphs
- Microscopy images
- Flow cytometry data

Software to open files:

- .csv: Fiji (https://imagej.net/software/fiji/downloads) or Microsoft Excel
- .xlsx: Microsoft Excel
- .tif, .lsm: Fiji (https://imagej.net/software/fiji/downloads)
- .pzfx: GraphPad Prism
- .svg: Inkscape (https://inkscape.org/)
- .fcs: FCS Express
- .pdf: Adobe Acrobat or Mozilla Firefox
- .ijm: Fiji (https://imagej.net/software/fiji/downloads)
The following datafiles contain detailed information about vehicles in the UK, which would be too large to use as structured tables. They are provided as simple CSV text files that should be easier to use digitally.
Data tables containing aggregated information about vehicles in the UK are also available.
We welcome any feedback on the structure of our new datafiles, their usability, or any suggestions for improvements; please contact vehicles statistics.
CSV files can be used either as a spreadsheet (using Microsoft Excel or similar spreadsheet packages) or digitally using software packages and languages (for example, R or Python).
When opened as a spreadsheet there will be no formatting, but the file can still be explored like our publication tables. Due to their size, older software might not be able to open the entire file.
df_VEH0120_GB: Vehicles at the end of the quarter by licence status, body type, make, generic model and model: Great Britain (CSV, 37.6 MB) - https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1077520/df_VEH0120_GB.csv
Scope: All registered vehicles in Great Britain; from 1994 Quarter 4 (end December)
Schema: BodyType, Make, GenModel, Model, LicenceStatus, [number of vehicles; one column per quarter]
df_VEH0120_UK: Vehicles at the end of the quarter by licence status, body type, make, generic model and model: United Kingdom (CSV, 20.8 MB) - https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1077521/df_VEH0120_UK.csv
Scope: All registered vehicles in the United Kingdom; from 2014 Quarter 3 (end September)
Schema: BodyType, Make, GenModel, Model, LicenceStatus, [number of vehicles; one column per quarter]
df_VEH0160_GB: Vehicles registered for the first time by body type, make, generic model and model: Great Britain (CSV, 17.1 MB) - https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1077522/df_VEH0160_GB.csv
Scope: All vehicles registered for the first time in Great Britain; from 2001 Quarter 1 (January to March)
Schema: BodyType, Make, GenModel, Model, [number of vehicles; one column per quarter]
df_VEH0160_UK: Vehicles registered for the first time by body type, make, generic model and model: United Kingdom (CSV, 4.93 MB) - https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1077523/df_VEH0160_UK.csv
Scope: All vehicles registered for the first time in the United Kingdom; from 2014 Quarter 3 (July to September)
Schema: BodyType, Make, GenModel, Model, [number of vehicles; one column per quarter]
df_VEH0124: Vehicles at the end of the quarter by licence status, body type, make, generic model, model, year of first use and year of manufacture: United Kingdom (CSV, 28.2 MB) - https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1077524/df_VEH0124.csv
Scope: All licensed vehicles in the United Kingdom; 2021 Quarter 4 (end December) only
Schema: BodyType, Make, GenModel, Model, YearFirstUsed, YearManufacture, Licensed (number of vehicles), SORN (number of vehicles)
df_VEH0220:
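Because each file stores counts in one column per quarter, a common first step in R or Python is to reshape the wide quarterly columns into long form. The sketch below uses a tiny inline sample mimicking the df_VEH0160 schema listed above; the quarter column labels and the sample values are assumptions for illustration.

```python
import io

import pandas as pd

# Minimal sample mimicking the df_VEH0160 schema: fixed columns, then one
# count column per quarter (quarter labels and values are illustrative).
SAMPLE = io.StringIO(
    "BodyType,Make,GenModel,Model,2021Q3,2021Q4\n"
    "Cars,FORD,FIESTA,FIESTA ZETEC,120,95\n"
    "Cars,FORD,FIESTA,FIESTA TITANIUM,80,105\n"
)

df = pd.read_csv(SAMPLE)
id_cols = ["BodyType", "Make", "GenModel", "Model"]

# Melt the per-quarter columns into (Quarter, Vehicles) rows.
long = df.melt(id_vars=id_cols, var_name="Quarter", value_name="Vehicles")

# Example aggregation: total first registrations per quarter.
per_quarter = long.groupby("Quarter")["Vehicles"].sum()
```

The same pattern applies to the other files; for df_VEH0124 the identifier columns would also include YearFirstUsed and YearManufacture.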
Download Employee Vehicle Personal Use Excel Sheet

This dataset lists the employee name and taxable benefit for personal use of a City of Greater Sudbury vehicle as travel expenses for the year 2020. Expenses are broken down in separate tabs by quarter (Q1, Q2, Q3 and Q4). Data for other years is available in separate datasets. Updated quarterly when expenses are prepared.
Hello everyone, I made this Finance Dashboard in Power BI with the Finance Excel Workbook provided by Microsoft on their website.

Problem Statement

The goal of this Power BI dashboard is to analyze the financial performance of a company using the provided Microsoft sample data, and to create a visually appealing dashboard that gives an overview of the company's financial metrics, enabling stakeholders to make informed business decisions.

Sections in the Report

The report has multiple sections from which you can manage the data:

- Report data can be sliced by Segment, Country and Year to show particular data.
- The report contains two navigation pages: one is an overview and the other is a sales dashboard page, for better visualisation of the data.
- The report contains all the important data.
- The report contains different charts and bar graphs for the different sections.
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F23794893%2Fad300fb12ce26b77a2fb05cfee9c7892%2Ffinance%20report_page-0001.jpg?generation=1732438234032066&alt=media
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F23794893%2F005ab4278cdd159a81c7935aa21b9aa9%2Ffinance%20report_page-0002.jpg?generation=1732438324842803&alt=media
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Materials and Methods

The study was held in the Oral and Maxillofacial Surgery department of Kasturba Hospital, Manipal, from November 2019 to October 2021, after approval from the Institutional Ethics Committee (IEC: 924/2019). The study included patients aged 18 to 70 years. Patients with associated diseases such as cysts or tumors of the jaw bones, pregnant women, and those with underlying psychological issues were excluded. The patients were assessed 8 to 12 weeks after surgical intervention. A data schedule was prepared to document age, sex, and fracture type. The study consisted of 182 subjects divided into two groups of 91 each (Group A: mild to moderate facial injury; Group B: severe facial injury) based on the severity of maxillofacial fractures and facial injury. Informed consent was obtained from each of the study participants. We followed the Facial Injury Severity Scale (FISS) to determine the severity of facial fractures and injuries. The face is divided horizontally into the mandibular, mid-facial, and upper facial thirds. Fractures in these thirds are given points based on their type (Table 1). Injuries with a total score above 4.4 were considered severe facial injuries (Group B), and those with a total score below 4.4 were considered mild/moderate facial injuries (Group A). The QOL was compared between the two groups. Meticulous management of hard and soft tissue injuries was implemented in our state-of-the-art tertiary care hospital. All elective cases were surgically treated at least 72 hours after the initial trauma. The facial fractures were adequately reduced and fixed with high-end titanium miniplates and screws (AO Principles of Fracture Management). Soft tissue injuries were managed by wound debridement, removal of foreign bodies, and layered wound closure. Adequate pain-relieving medication was prescribed postoperatively for effective pain control.
The QOL of the subjects was assessed using the 'Twenty-point Quality of life assessment in facial trauma patients in Indian population' assessment tool. This tool contains 20 questions and uses a five-point Likert response scale. The twenty-point quality of life assessment tool comprises two zones: Zone 1 (psychosocial impact) and Zone 2 (functional and esthetic impact), with ten questions (domains) each (Table 2). The score for each question ranged from 1 to 5, with a higher score denoting better quality of life. Accordingly, the score in each zone for a patient ranged from 10 to 50, and the total scores of both zones were recorded to determine the QOL. The sum of both zones determined the prognosis following surgery (Table 2). The data collected were entered into a Microsoft Excel spreadsheet and analyzed using IBM SPSS Statistics, Version 22 (Armonk, NY: IBM Corp). Descriptive data were presented as frequency and percentage for categorical variables, and as mean, median, standard deviation, and quartiles for continuous variables. Since the data did not follow a normal distribution, a non-parametric test was used. QOL scores were compared between the study groups using the Mann-Whitney U test. A P value < 0.05 was considered statistically significant.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary Trend Tables

The HCUP Summary Trend Tables include information on hospital utilization derived from the HCUP State Inpatient Databases (SID), State Emergency Department Databases (SEDD), National Inpatient Sample (NIS), and Nationwide Emergency Department Sample (NEDS). State statistics are displayed by discharge month; national and regional statistics are displayed by discharge quarter. Information on emergency department (ED) utilization is dependent on the availability of HCUP data; not all HCUP Partners participate in the SEDD.

The HCUP Summary Trend Tables include downloadable Microsoft® Excel tables with information on the following topics:

- Overview of trends in inpatient and emergency department utilization
- All inpatient encounter types
- Inpatient encounter type
  - Normal newborns
  - Deliveries
  - Non-elective inpatient stays, admitted through the ED
  - Non-elective inpatient stays, not admitted through the ED
  - Elective inpatient stays
- Inpatient service line
  - Maternal and neonatal conditions
  - Mental health and substance use disorders
  - Injuries
  - Surgeries
  - Other medical conditions
- ED treat-and-release visits
- Description of the data source, methodology, and clinical criteria (Excel file, 43 KB)
- Change log (Excel file, 65 KB)

For each type of inpatient stay, there is an Excel file for the number of discharges, the percent of discharges, the average length of stay, the in-hospital mortality rate per 100 discharges, and the population-based rate per 100,000 population. Each Excel file contains State-specific, region-specific, and national statistics. For most files, trends begin in January 2017. Also included in each Excel file is a description of the HCUP databases and methodology.
A free mapping tool that allows you to create a thematic map of London without any specialist GIS skills or software - all you need is Microsoft Excel. Templates are available for London's Boroughs and Wards. Full instructions are contained within the spreadsheets.
The tool works in any version of Excel, but the user MUST ENABLE MACROS for the features to work. There are some restrictions on functionality in the ward maps in Excel 2003 and earlier; full instructions are included in the spreadsheet.
To check whether macros are enabled in Excel 2003, click Tools, Macro, Security and change the setting to Medium. You then have to restart Excel for the changes to take effect. When Excel starts up, a prompt will ask if you want to enable macros - click yes.
In Excel 2007 and later, the correct setting should be applied by default; if it has been changed, click the Office button in the top corner, then Excel Options (at the bottom), Trust Centre, Trust Centre Settings, and make sure it is set to 'Disable all macros with notification'. Then, when you open the spreadsheet, a prompt labelled 'Options' will appear at the top for you to enable macros.
To create your own thematic borough maps in Excel using the ward map tool as a starting point, read these instructions. You will need to be a confident Excel user, and have access to your boundaries as a picture file from elsewhere. The mapping tools created here are all fully open access with no passwords.
Copyright notice: If you publish these maps, a copyright notice must be included within the report saying: "Contains Ordnance Survey data © Crown copyright and database rights."
NOTE: Excel 2003 users must 'ungroup' the map for it to work.
The Adventure Works dataset is a comprehensive and widely used sample database provided by Microsoft for educational and testing purposes. It's designed to represent a fictional company, Adventure Works Cycles, which is a global manufacturer of bicycles and related products. The dataset is often used for learning and practicing various data management, analysis, and reporting skills.
1. Company Overview:
   - Industry: Bicycle manufacturing
   - Operations: Global presence with various departments such as sales, production, and human resources.

2. Data Structure:
   - Tables: The dataset includes a variety of tables, typically organized into categories such as:
     - Sales: Information about sales orders, products, and customer details.
     - Production: Data on manufacturing processes, inventory, and product specifications.
     - Human Resources: Employee details, departments, and job roles.
     - Purchasing: Vendor information and purchase orders.

3. Sample Tables:
   - Sales.SalesOrderHeader: Contains information about sales orders, including order dates, customer IDs, and total amounts.
   - Sales.SalesOrderDetail: Details of individual items within each sales order, such as product ID, quantity, and unit price.
   - Production.Product: Information about the products being manufactured, including product names, categories, and prices.
   - Production.ProductCategory: Data on product categories, such as bicycles and accessories.
   - Person.Person: Contains personal information about employees and contacts, including names and addresses.
   - Purchasing.Vendor: Information on vendors that supply the company with materials.

4. Usage:
   - Training and Education: It's widely used for teaching SQL, data analysis, and database management.
   - Testing and Demonstrations: Useful for testing software features and demonstrating data-related functionalities.

5. Tools:
   - The dataset is often used with Microsoft SQL Server, but it's also compatible with other relational database systems.
The Adventure Works dataset provides a rich and realistic environment for practicing a range of data-related tasks, from querying and reporting to data modeling and analysis.
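A classic exercise with these tables is joining SalesOrderDetail to SalesOrderHeader and aggregating line totals per order. The sketch below uses tiny inline stand-ins with simplified columns (the real tables have many more fields), so it runs without a database.

```python
import pandas as pd

# Tiny stand-ins for Sales.SalesOrderHeader and Sales.SalesOrderDetail
# (columns simplified for illustration; values are made up).
header = pd.DataFrame({
    "SalesOrderID": [1, 2],
    "OrderDate": ["2023-01-05", "2023-01-06"],
    "CustomerID": [101, 102],
})
detail = pd.DataFrame({
    "SalesOrderID": [1, 1, 2],
    "ProductID": [707, 708, 707],
    "OrderQty": [2, 1, 3],
    "UnitPrice": [34.99, 34.99, 34.99],
})

# Join each detail line to its order header, then compute line totals.
orders = detail.merge(header, on="SalesOrderID")
orders["LineTotal"] = orders["OrderQty"] * orders["UnitPrice"]
totals = orders.groupby("SalesOrderID")["LineTotal"].sum()
```

The equivalent SQL would be an INNER JOIN on SalesOrderID with SUM(OrderQty * UnitPrice) grouped by order.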
https://creativecommons.org/publicdomain/zero/1.0/
By Huggingface Hub [source]
HelpSteer is an open-source dataset designed to support AI alignment through fair, team-oriented annotation. The dataset provides 37,120 samples, each containing a prompt and a response along with five human-annotated attributes scored between 0 and 4, with higher scores indicating better quality. Combining machine learning and natural language processing methods with expert annotation, HelpSteer aims to provide a standardized set of values for measuring alignment between human and machine interactions. With responses rated for correctness, coherence, complexity, helpfulness and verbosity, HelpSteer sets out to help organizations build reliable AI models that produce more accurate results, leading to an improved user experience.
How to Use HelpSteer: An Open-Source AI Alignment Dataset
HelpSteer is an open-source dataset designed to help researchers create models with AI Alignment. The dataset consists of 37,120 different samples each containing a prompt, a response and five human-annotated attributes used to measure these responses. This guide will give you a step-by-step introduction on how to leverage HelpSteer for your own projects.
Step 1 - Choosing the Data File
HelpSteer contains two data files: one for training and one for validation. To start exploring the dataset, first select the file you would like to use by downloading train.csv and validation.csv from the Kaggle page linked above, or get them from the Google Drive repository attached here: [link]. Each sample in each file has 7 columns: prompt (given), response (submitted), helpfulness, correctness, coherence, complexity and verbosity, all with values between 0 and 4, where higher means better in the respective category.
Step 2 - Exploratory Data Analysis (EDA)
Once you have your file loaded into your workspace or favorite software environment (e.g. libraries such as Pandas/NumPy, or even Microsoft Excel), it's time to explore it further by running some basic EDA commands that summarize each feature's distribution and note potential trends or points of interest. For example: which traits polarize responses the most? Are there outliers that might signal something interesting? Plotting these results often provides great insight into patterns across the dataset, which can be used later during the modelling phase, also known as feature engineering.
Step 3 - Data Preprocessing
Your interpretation of the raw data during EDA should produce some hypotheses about which features matter most for accurately estimating the attribute scores of unseen responses. Preprocessing, such as cleaning up missing entries or handling outliers, is therefore highly recommended before any modelling effort with this dataset. Refer back to the Kaggle page description if you are unsure about the allowed value ranges of specific attributes; having the correct numerical ranges ready makes the modelling workload lighter later on. Do not rush this stage: skipping it can lead to poor results when aiming for high accuracy at model deployment.
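The steps above can be sketched in pandas. The rows below are synthetic stand-ins in the documented 7-column layout; with the real files you would replace the inline DataFrame with `pd.read_csv("train.csv")`.

```python
import pandas as pd

ATTRIBUTES = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]

# Synthetic rows in the 7-column layout described in Step 1 (values made up).
df = pd.DataFrame({
    "prompt": ["How do I boil an egg?", "Explain TCP handshakes"],
    "response": ["Place the egg in boiling water...", "TCP uses a three-way..."],
    "helpfulness": [4, 2],
    "correctness": [4, 3],
    "coherence": [3, 4],
    "complexity": [1, 3],
    "verbosity": [2, 4],
})

# Step 2: summarize each attribute's distribution.
summary = df[ATTRIBUTES].describe()

# Step 3: basic preprocessing - drop rows with missing entries and keep
# only scores inside the documented 0-4 range.
clean = df.dropna()
in_range = (clean[ATTRIBUTES] >= 0).all(axis=1) & (clean[ATTRIBUTES] <= 4).all(axis=1)
clean = clean[in_range]
```

`summary` gives the per-attribute mean, spread and quartiles that the EDA step asks for; the range filter is one simple example of the outlier handling mentioned above.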
- Designating and measuring conversational AI engagement goals: Researchers can utilize the HelpSteer dataset to design evaluation metrics for AI engagement systems.
- Identifying conversational trends: By analyzing the annotations and data in HelpSteer, organizations can gain insights into what makes conversations more helpful, cohesive, complex or consistent across datasets or audiences.
- Training Virtual Assistants: Train artificial intelligence algorithms on this dataset to develop virtual assistants that respond effectively to customer queries with helpful answers.
If you use this dataset in your research, please credit the original authors.
**License: [CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication](https://creativecommons.org/pu...
By City of Chicago [source]
Looking for a dataset that captures the weather conditions at various beaches along Chicago's Lake Michigan lakefront? Look no further than the Beach Weather Stations - Automated Sensors dataset! It contains information on weather conditions at these beaches, collected by sensors maintained by the Chicago Park District.
Generally, these sensors capture the indicated measurements hourly while they are in operation during the summer. However, during other seasons and at some other times, information from the sensors may not be available. Nevertheless, this dataset provides a wealth of information that can be used to study the weather conditions along Chicago's lakefront. So whether you're a casual observer or a serious scientist, this dataset is sure to be of interest!
To use this dataset, you will need to download the CSV file from the City of Chicago's Open Data Portal. Once you have downloaded the file, you can open it in a spreadsheet application such as Microsoft Excel or Google Sheets.
The dataset contains information on the weather conditions at various beaches along Chicago's Lake Michigan lakefront. The columns in the dataset include the station name, measurement timestamp, air temperature, wet bulb temperature, rain intensity, interval rain, total rain, precipitation type, wind direction, wind speed, maximum wind speed, barometric pressure, solar radiation, heading, battery life and measurement timestamp label.
You can use this dataset to answer questions about the weather conditions at Chicago's beaches. For example, you could find out what the average air temperature was over the course of a summer and compare it to other years, or compare the amount of rain that fell at different beaches during a thunderstorm.
- Estimating the amount of rainfall at a beach over a period of time
- Tracking the types of precipitation at a beach
- Determining the wind speed and direction at a beach
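The example questions above can be sketched in pandas. The readings below are synthetic stand-ins using the column names from the file description; with the real file you would replace the inline DataFrame with `pd.read_csv("beach-weather-stations-automated-sensors-1.csv")`.

```python
import pandas as pd

# Synthetic hourly readings (station names and values are made up).
df = pd.DataFrame({
    "Station Name": ["Oak Street", "Oak Street", "63rd Street", "63rd Street"],
    "Measurement Timestamp": pd.to_datetime(
        ["2023-07-01 10:00", "2023-07-01 11:00",
         "2023-07-01 10:00", "2023-07-01 11:00"]),
    "Air Temperature": [78.0, 82.0, 75.0, 77.0],  # degrees Fahrenheit
    "Interval Rain": [0.0, 0.1, 0.0, 0.3],        # inches per interval
})

# Average air temperature per station (the comparison suggested above).
avg_temp = df.groupby("Station Name")["Air Temperature"].mean()

# Total rainfall per station over the period.
total_rain = df.groupby("Station Name")["Interval Rain"].sum()
```

Grouping by a resampled timestamp instead of the station name would give the per-summer averages mentioned above.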
License
See the dataset description for more information.
File: beach-weather-stations-automated-sensors-1.csv

| Column name | Description |
|:---|:---|
| Station Name | The name of the weather station. (String) |
| Measurement Timestamp | The timestamp of the measurement. (DateTime) |
| Air Temperature | The air temperature in degrees Fahrenheit. (Float) |
| Wet Bulb Temperature | The wet bulb temperature in degrees Fahrenheit. (Float) |
| Rain Intensity | The intensity of the rain in inches per hour. (Float) |
| Interval Rain | The amount of rain in the interval in inches. (Float) |
| Total Rain | The total amount of rain in inches. (Float) |
| Precipitation Type | The type of precipitation. (String) |
| Wind Direction | The direction of the wind in degrees. (Float) |
| Wind Speed | The speed of the wind in miles per hour. (Float) |
| Maximum Wind Speed | The maximum speed of the wind in miles per hour. (Float) |
| Barometric Pressure | The barometric pressure in inches of mercury. (Float) |
| Solar Radiation | The solar radiation in watts per square meter. (Float) |
| Heading | The heading in degrees. (Float) |
| Battery Life | The battery life in percent. (Float) |
| Measurement Timestamp Label | The label for the measurement timestamp. (String) |
If you use this dataset in your research, please credit City of Chicago.
By Amresh [source]
This All India Saree Retailers Database is a comprehensive collection of up-to-date information on 10,000 saree retailers located all over India. The database was last updated in April 2021 and offers an overall accuracy rate of around 90%.
For business owners, marketers, data analysts and researchers, this dataset is an invaluable resource. It contains the store name, contact person name, phone number and email address, along with location information such as city, state and PIN code, to help you target the right audience precisely.
The database is provided in Microsoft Excel (.xlsx) format, which makes it easy to read or manipulate the file according to your needs. A wide range of payment options (Credit/Debit Card, Online Transfer, NEFT, Cash Deposit, Paytm, PhonePe, Google Pay or PayPal) allows quick download access within 2-3 business hours.
So if you are looking for reliable business intelligence data related to Indian saree retailers that can help you unlock incredible opportunities for your business then make sure to download our All India Saree Retailers Database at the earliest!
This dataset provides a comprehensive list of saree retailers in India, including store name, contact person, email address, mobile number, phone number, and address details such as city, state and PIN code. It contains 10 thousand records updated in April 2021 with an overall accuracy rate of around 90%. This data can be used to understand customer behaviour as well as to analyse geographical customer patterns.
Using this dataset you can:
1. Target specific states or cities where potential customers are located for your saree business.
2. Get in touch with local saree retailers for possible collaborations and partnerships.
3. Learn more about industry trends from actual store owners, who can offer insights into the latest trends and identify new opportunities for you to grow your business.
4. Analyse existing competitors' market share by studying the cities/states where they operate, using their contact information such as mobile numbers and email IDs.
5. Identify potential new customers for better sales conversion rates by understanding who already operates in similar products nearby or has a similar target audience, helping your company reach out to them quickly and effectively using direct marketing techniques such as email and SMS.
- Creating targeted email campaigns to increase Saree sales: The dataset can be used to create targeted email campaigns that can reach the 10,000 Saree Retailers in India. This will allow businesses to increase sales by directing their message about promotions and discounts directly to potential customers.
- Customizing online product recommendations for each retailer: The dataset can be used to identify the specific products that each individual retailer is interested in selling, so product recommendations on an e-commerce website could be tailored accordingly. This would optimize customer experience giving them more accurate and relevant results when searching for a particular item they are looking for while shopping online.
- Using GPS technology to generate location-based marketing campaigns: By creating geo-fenced areas around each store using the PIN code database, it would be possible to send out marketing messages based on people's physical location, instead of broadcasting them to whole neighborhoods or cities without regard for store locations. This could reach specific customers with relevant messages about products or promotions that may interest them, more effectively than a campaign with no location targeting.
If you use this dataset in your research, please credit the original authors.
See the dataset description for more information.
File: 301-Saree-Garment-Retailer-Database-Sample.csv
If you use this dataset in your research, please credit Amresh.