With this add-in it is possible to create map templates from GIS files in KML format, and to create choropleth maps with them. Provided you have access to KML-format map boundary files, you can create your own quick and easy choropleth maps in Excel. KML files can be converted from 'shape' files, many of which are available to download for free from the web, including from Ordnance Survey and the London Datastore. Standard mapping packages such as QGIS (free to download) and ArcGIS can convert the files to KML format. A sample KML file (London wards) can be downloaded from this page so that users can easily test the tool. Macros must be enabled for the tool to function.

When creating the map using the Excel tool, the 'unique ID' should normally be the area code and the 'Name' should be the area name; if required, and if there is additional data in the KML file, further 'data' fields can be added. These columns will appear below and to the right of the map. Otherwise, data can be added later next to the codes and names. In the add-in version of the tool the final control, 'Scale (% window)', should not normally be changed. With the default value of 0.5, the height of the map is set to half the total size of the user's Excel window.

To run a choropleth, select the menu option 'Run Choropleth' to get the form. To specify the colour ramp, enter the number of boxes into which the range is to be divided and the colours for the high and low ends of the range, selecting coloured option boxes as appropriate. If wished, hit the 'Swap' button to exchange the colours at the two ends of the range. Then hit the 'Choropleth' button. The default colours for the ends of the choropleth range are saved in the add-in, but different values can be selected by setting up a column range of up to twelve cells, anywhere in Excel, filled with the desired colours. Then use the 'Colour range' control to select this range and hit 'Apply', having selected high or low values as wished. The 'Copy' button sets up a sheet 'ColourRamp' in the active workbook with the default colours, which can be extended or trimmed with just a few cells, saving the user time.

The add-in was developed entirely within the Excel VBA IDE by Tim Lund. He is kindly distributing the tool for free on the Datastore but suggests that users who find it useful make a donation to the Shelter charity. He does not intend to actively maintain the tool, but if any users or developers would like to add more features, email the author.

Acknowledgments: calculation of Excel freeform shapes from latitudes and longitudes uses calculations from the Ordnance Survey.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Single-Family Portfolio Snapshot consists of a monthly data table and a report generator (an Excel pivot table) that can be used to quickly create new reports of interest from the data records. The data records themselves are loan-level records using all of the categorical variables highlighted on the report generator table. Users may download and save the Excel file that contains the data records and the pivot table.

The report generator sheet consists of an Excel pivot table that gives individual users some ability to analyze monthly trends on dimensions of interest to them. There are six choice dimensions: property state, property county, loan purpose, loan type, property product type, and downpayment source. Each selection variable has an associated drop-down menu that is accessed by clicking once on the associated arrows. Only single selections can be made from each menu; for example, users must choose one state or all states, one county or all counties. If a county is chosen that does not correspond with the selected state, the result will be null values.

The data records include each report generator choice variable plus the property zip code, originating mortgagee (lender) number, sponsor-lender name, sponsor number, nonprofit gift provider tax identification number, interest rate, and FHA insurance endorsement year and month. The report generator only provides output for the dollar amount of loans. Users who wish to analyze other data available on the data table, for example interest rates or sponsor number, must first download the Excel file. See the data definitions (PDF in the top folder) for details on each data element. Files switch from .zip to Excel format in August 2017.
CC0 1.0 Universal: https://creativecommons.org/publicdomain/zero/1.0/
Business roles at AgroStar require a baseline of analytical skills, and it is also critical that we are able to explain complex concepts simply to a variety of audiences. This test is structured so that someone with the baseline skills needed to succeed in the role should be able to complete it in under 4 hours without assistance.
Use the data in the included sheet to address the following scenario...
Since its inception, AgroStar has been leveraging an assisted marketplace model. Given that the market potential is huge and that the target customer appreciates a physical store nearby, we have decided to explore the offline retail model to drive growth. The primary objective is to capture a larger wallet share for AgroStar among existing customers.
Assume you are back in time, in August 2018, and you have been asked to determine the location (taluka) of the first AgroStar offline retail store.
1. What are the key factors you would use to determine the location? Why?
2. Which taluka (across the three states) would you choose to open in? Why?
-- (1) Please mention any assumptions you have made and the underlying thought process
-- (2) Please treat the assignment as standalone (it should be self-explanatory to someone who reads it), but we will have a follow-up discussion with you in which we will walk through your approach to this assignment.
-- (3) Mention any data that may be missing that would make this study more meaningful
-- (4) Kindly conduct your analysis within the spreadsheet; we would like to see the working sheet. If you face any issues due to the file size, kindly download this file and share an Excel sheet with us
-- (5) If you would like to append a word document/presentation to summarize, please go ahead.
-- (6) In case you use any external data source/article, kindly share the source.
The file CDNOW_master.txt contains the entire purchase history up to the end of June 1998 of the cohort of 23,570 individuals who made their first-ever purchase at CDNOW in the first quarter of 1997. This CDNOW dataset was first used by Fader and Hardie (2001).
Each record in this file, 69,659 in total, comprises four fields: the customer's ID, the date of the transaction, the number of CDs purchased, and the dollar value of the transaction.
CustID = CDNOW_master(:,1); % customer id
Date   = CDNOW_master(:,2); % transaction date
Quant  = CDNOW_master(:,3); % number of CDs purchased
Spend  = CDNOW_master(:,4); % dollar value (excl. S&H)
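The MATLAB snippet above can be mirrored in Python. A minimal sketch using only the standard library, assuming the whitespace-delimited four-field layout described above (the sample records below are made up, not real data):

```python
from datetime import datetime

def load_cdnow(lines):
    """Parse CDNOW-style records: customer id, date (YYYYMMDD),
    number of CDs purchased, and dollar value of the transaction."""
    records = []
    for line in lines:
        cust_id, date, quant, spend = line.split()
        records.append({
            "CustID": cust_id,
            "Date": datetime.strptime(date, "%Y%m%d"),
            "Quant": int(quant),
            "Spend": float(spend),
        })
    return records

# Two made-up example records (not from the real file):
sample = ["00001 19970101 1 11.77", "00002 19970112 5 77.00"]
rows = load_cdnow(sample)
```

In practice one would pass an open file handle over CDNOW_master.txt instead of the inline sample.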
See "Notes on the CDNOW Master Data Set" (http://brucehardie.com/notes/026/) for details of how the 1/10th systematic sample (http://brucehardie.com/datasets/CDNOW_sample.zip) used in many papers was created.
Reference:
Fader, Peter S. and Bruce G. S. Hardie (2001), "Forecasting Repeat Sales at CDNOW: A Case Study," Interfaces, 31 (May-June), Part 2 of 2, S94-S107.
I have merged all three datasets into one file and also performed some feature engineering.
Available Data: You will be given anonymized user gameplay data in the form of 3 csv files.
Fields in the data are as described below:
Gameplay_Data.csv contains the following fields:
* Uid: Alphanumeric unique Id assigned to user
* Eventtime: DateTime on which user played the tournament
* Entry_Fee: Entry Fee of tournament
* Win_Loss: ‘W’ if the user won that particular tournament, ‘L’ otherwise
* Winnings: How much money the user won in the tournament (0 for ‘L’)
* Tournament_Type: Type of tournament user played (A / B / C / D)
* Num_Players: Number of players that played in this tournament
Wallet_Balance.csv contains the following fields:
* Uid: Alphanumeric unique Id assigned to user
* Timestamp: DateTime at which the user's wallet balance is given
* Wallet_Balance: User's wallet balance at the given timestamp
Demographic.csv contains the following fields:
* Uid: Alphanumeric unique Id assigned to user
* Installed_At: Timestamp at which the user installed the app
* Connection_Type: User's internet connection type (e.g. Cellular / Dial Up)
* Cpu_Type: CPU type of the device the user is playing on
* Network_Type: Network type in encoded form
* Device_Manufacturer: e.g. Realme
* ISP: Internet Service Provider, e.g. Airtel
* Country
* Country_Subdivision
* City
* Postal_Code
* Language: Language that the user has selected for gameplay
* Device_Name
* Device_Type
Build a basic recommendation system that is able to rank/recommend relevant tournaments and entry prices to the user. The main objectives are:
1. A user should not have to scroll too much before selecting a tournament of their preference
2. We would like the user to play as high an entry-fee tournament as possible
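As one illustrative baseline (a sketch, not the expected full solution), tournament types could be ranked by the user's own play history, with overall popularity as the tie-breaker and the fallback for types the user has never tried. The function name and sample values below are hypothetical:

```python
from collections import Counter

def rank_tournament_types(user_plays, all_plays):
    """Rank tournament types for one user: personal play counts first,
    overall popularity across all users as the tie-breaker."""
    personal = Counter(user_plays)
    overall = Counter(all_plays)
    types = sorted(set(all_plays) | set(user_plays))
    return sorted(types, key=lambda t: (personal[t], overall[t]), reverse=True)

# A user who mostly plays type 'A' sees 'A' first, then their other
# played types, then globally popular types they have not tried yet.
ranking = rank_tournament_types(["A", "A", "B"],
                                ["A", "B", "B", "C", "C", "C", "D"])
```

A real system would also weight by entry fee to serve objective 2, e.g. by breaking ties toward higher-fee tournaments the user can afford given their wallet balance.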
The data explorer allows users to create bespoke cross tabs and charts on consumption by property attributes and characteristics, based on the data available from NEED. Two variables can be selected at once (for example property age and property type), with mean, median or number of observations shown in the table. There is also a choice of fuel (electricity or gas). The data spans 2007 to 2019.
Figures provided in the latest version of the tool (June 2021) are based on data used in the June 2021 National Energy Efficiency Data-Framework (NEED) publication. More information on the development of the framework, headline results and data quality is available in the publication. There are also additional detailed tables including distributions of consumption and estimates at local authority level. The data are also available as a comma separated value (csv) file.
We identified 2 processing errors in this edition of the Domestic NEED Annual report and corrected them. The changes are small and do not affect the overall findings of the report, only the domestic energy consumption estimates. The impact of energy efficiency measures analysis remains unchanged. The revisions are summarised on the Domestic NEED Report 2021 release page.
If you have any queries or comments on these outputs please contact: energyefficiency.stats@beis.gov.uk.
XLSM, 2.51MB
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Microsatellites, also known as SSRs or STRs, are polymorphic DNA regions with tandem repetitions of a nucleotide motif of 1–6 base pairs, with a broad range of applications in fields such as comparative genomics, molecular biology, and forensics. However, many researchers do not have computational training and struggle to run command-line tools, or make do with very limited web tools, for their SSR research, spending considerable time learning how to execute the software and tabulating the post-processed data in other tools or manually, time that could be spent directly on data analysis. We present EasySSR, a user-friendly web tool with full command-line functionality, designed for practical use in batch identification and comparison of SSRs in sequences, draft genomes, or complete genomes, requiring no previous bioinformatics skills to run. EasySSR requires only a FASTA file, and optionally a GenBank file, of one or more genomes to identify and compare STRs. The tool can automatically analyze and compare SSRs in whole genomes, convert GenBank to PTT files, identify perfect and imperfect SSRs in coding and non-coding regions, and compare their frequencies, abundance, motifs, flanking sequences, and iterations, producing many outputs ready for download such as PTT files, interactive charts, and Excel tables, giving the user data ready for further analysis in minutes. EasySSR was implemented as a web application, which can be executed from any browser, and is available for free at https://computationalbiology.ufpa.br/easyssr/. Tutorials, usage notes, and download links for the source code can be found at https://github.com/engbiopct/EasySSR.
Market basket analysis with the Apriori algorithm
The retailer wants to target customers with suggestions on itemsets that a customer is most likely to purchase. I was given a dataset containing a retailer's transaction data, covering all the transactions that happened over a period of time. The retailer will use the results to grow the business and to provide customers with suggestions on itemsets, so that we are able to increase customer engagement, improve customer experience, and identify customer behaviour. I will solve this problem using Association Rules, a type of unsupervised learning technique that checks for the dependency of one data item on another data item.
Association Rule mining is most often used when you want to find associations between different objects in a set. It works well when you are looking for frequent patterns in a transaction database. It can tell you which items customers frequently buy together, allowing the retailer to identify relationships between the items.
Assume there are 100 customers; 10 of them bought a computer mouse, 9 bought a mouse mat, and 8 bought both. For the rule 'bought computer mouse => bought mouse mat':
- support = P(mouse & mat) = 8/100 = 0.08
- confidence = support / P(computer mouse) = 0.08/0.10 = 0.80
- lift = confidence / P(mouse mat) = 0.80/0.09 ≈ 8.9

This is just a simple example. In practice, a rule needs the support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.
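The arithmetic above can be captured in a small helper using the standard definitions of support, confidence, and lift (a sketch; the function name is my own):

```python
def rule_metrics(n_total, n_antecedent, n_consequent, n_both):
    """Support, confidence and lift for the rule antecedent => consequent.

    n_total:      total number of customers (transactions)
    n_antecedent: how many bought the antecedent (e.g. the computer mouse)
    n_consequent: how many bought the consequent (e.g. the mouse mat)
    n_both:       how many bought both
    """
    support = n_both / n_total                    # P(A & B)
    confidence = n_both / n_antecedent            # P(A & B) / P(A)
    lift = confidence / (n_consequent / n_total)  # confidence / P(B)
    return support, confidence, lift

# The worked example: 100 customers, 10 bought a mouse, 9 a mat, 8 both.
support, confidence, lift = rule_metrics(100, 10, 9, 8)
```

Note that lift is symmetric in the two items, while confidence depends on the direction of the rule.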
Number of Attributes: 7
First, we need to load the required libraries; each is described briefly below.
Next, we need to load Assignment-1_Data.xlsx into R to read the dataset. Now we can see our data in R.
Next we will clean our data frame and remove missing values.
To apply Association Rule mining, we need to convert the data frame into transaction data, so that all items bought together in one invoice are grouped into a single transaction.
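The walkthrough performs this step in R; the grouping itself can be sketched language-neutrally in Python (invoice numbers and item names below are made up for illustration):

```python
from collections import defaultdict

def to_transactions(rows):
    """Group (invoice_id, item) pairs into one itemset per invoice,
    so items bought together end up in the same transaction."""
    baskets = defaultdict(set)
    for invoice, item in rows:
        baskets[invoice].add(item)
    # Sort each basket for deterministic output.
    return {inv: sorted(items) for inv, items in baskets.items()}

# Two invoices: items on the same invoice land in one basket.
baskets = to_transactions([(536365, "WHITE MUG"),
                           (536365, "CANDLE"),
                           (536366, "CANDLE")])
```

In R this corresponds to reading the data with read.transactions from the arules package, after which the Apriori algorithm can be applied.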
Hello Everyone, I made this Finance Dashboard in Power BI with the Finance Excel Workbook provided by Microsoft on their website.

Problem Statement: The goal of this Power BI dashboard is to analyze the financial performance of a company using the provided Microsoft sample data, and to create a visually appealing dashboard that gives an overview of the company's financial metrics, enabling stakeholders to make informed business decisions.

Sections in the Report: The report has multiple sections from which you can manage the data:
- Report data can be sliced by Segment, Country and Year to show particular data.
- The report contains two navigation pages, one an overview and the other a sales dashboard page, for better visualisation of the data.
- The report contains all the important data.
- The report contains different charts and bar graphs for the different sections.
[Screenshot: Finance report, page 1]
[Screenshot: Finance report, page 2]
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Contents example:
[Screenshot: example of the dataset contents]
- Timeframe: Data ranging from 2020 to 2024.
- Symbols: 857 symbols, each with "-USDT" as the quote currency.
- Data Collection Date: Downloaded on March 2, 2024.
Purpose and Content: This data was collected via the KuCoin API and contains cryptocurrency price data.
Structured Organization: Data within the data folder are meticulously organized by symbols (e.g., BTC-USDT, ETH-USDT) and delineated into specific time intervals (e.g., 5 minutes, 1 hour, 1 day), facilitating efficient data management and accessibility.
Hierarchical Structure:
* Symbol Folders: Individual folders for each cryptocurrency pair provide segregated storage, simplifying data management for each trading symbol.
* Interval Folders: Within the symbol folders, data are further categorized into subfolders based on the time interval of the price data, enabling precise dataset retrieval.
* Files: Each Excel file, named with a distinct timestamp indicating the data's download or last update time, contains the actual price data points.
Use Cases: Data Analysis: The structured data repository empowers researchers and analysts to conduct a broad spectrum of analytical tasks, such as trend analysis, statistical evaluation, and the development of algorithmic trading strategies.
This dataset was created by Truong Dai
This dataset illustrates customer data from bike sales. It contains information such as Income, Occupation, Age, Commute, Gender, Children, and more. This is fictional data, created and used for data exploration and cleaning.
The link for the Excel project to download can be found on GitHub here. It includes the raw data, the cleaned data, Pivot Tables, and a dashboard with Pivot Charts and Slicers for interaction. This allows the interactive dashboard to filter by Marital Status, Region, and Education.
Below is a screenshot of the dashboard for reference.
[Screenshot: Bike Buyers dashboard]
Typically e-commerce datasets are proprietary and consequently hard to find among publicly available data. However, the UCI Machine Learning Repository has made available this dataset containing actual transactions from 2010 and 2011. The dataset is maintained on their site, where it can be found under the title "Online Retail".
"This is a transnational data set which contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail.The company mainly sells unique all-occasion gifts. Many customers of the company are wholesalers."
Per the UCI Machine Learning Repository, this data was made available by Dr Daqing Chen, Director: Public Analytics group. chend '@' lsbu.ac.uk, School of Engineering, London South Bank University, London SE1 0AA, UK.
Image from stocksnap.io.
Analyses for this dataset could include time series, clustering, classification and more.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
In the case study titled "Blinkit: Grocery Product Analysis," a dataset called 'Grocery Sales' contains 12 columns with information on sales of grocery items across different outlets. Using Tableau, you as a data analyst can uncover customer behavior insights, track sales trends, and gather feedback. These insights will drive operational improvements, enhance customer satisfaction, and optimize product offerings and store layout. Tableau enables data-driven decision-making for positive outcomes at Blinkit.
The Grocery Sales table is a .CSV file with the following columns:
• Item_Identifier: A unique ID for each product in the dataset.
• Item_Weight: The weight of the product.
• Item_Fat_Content: Indicates whether the product is low fat or not.
• Item_Visibility: The percentage of the total display area in the store that is allocated to the specific product.
• Item_Type: The category or type of product.
• Item_MRP: The maximum retail price (list price) of the product.
• Outlet_Identifier: A unique ID for each store in the dataset.
• Outlet_Establishment_Year: The year in which the store was established.
• Outlet_Size: The size of the store in terms of ground area covered.
• Outlet_Location_Type: The type of city or region in which the store is located.
• Outlet_Type: Indicates whether the store is a grocery store or a supermarket.
• Item_Outlet_Sales: The sales of the product in the particular store. This is the outcome variable that we want to predict.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
📊 NIFTY500 Stocks Data
Get comprehensive historical stock data for the NIFTY 500 index! 📈 This dataset includes stock prices at various time intervals for each NIFTY 500 company, organized into Excel files.
Each sheet in an Excel file contains:
- 📅 Date: Date and time
- 🏦 Open: Opening price
- 📈 High: Highest price
- 📉 Low: Lowest price
- 🔒 Close: Closing price
- 🔄 Volume: Shares traded
Ideal for:
- Financial analysts 📊
- Data scientists 🤖
- Market researchers 🔍

Analyze stock trends, develop trading strategies, or conduct research with varied timeframes for both long-term and short-term analysis.
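For instance, a short-term trend analysis might start from simple returns over the Close column of a sheet. A minimal sketch (the closing prices below are made up, not real NIFTY data):

```python
def simple_returns(closes):
    """Percentage change between consecutive closing prices."""
    return [(curr - prev) / prev for prev, curr in zip(closes, closes[1:])]

# Three made-up closes: a 10% gain followed by a 10% loss.
returns = simple_returns([100.0, 110.0, 99.0])
```

The same list-of-closes input could be read straight from the Close column of any sheet in the dataset.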
Happy analyzing! 📊🚀
This dataset was created by Hidayah Mohd Isa