Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This synthetic dataset contains 100 user event logs with a focus on datetime handling in Python. The timestamp column is intentionally stored as an object (string) to help learners practice (a short sketch follows this list):
Converting strings to datetime objects using pd.to_datetime
Extracting features like hour, day, weekday, etc.
Handling datetime formatting and manipulation
Performing time-based grouping and filtering
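A minimal sketch of these steps; the file name is a placeholder, and only the timestamp column is given by the description:

```python
import pandas as pd

# Load the event log; the file name here is hypothetical.
df = pd.read_csv("user_event_logs.csv")

# Convert the object (string) column into datetime objects.
df["timestamp"] = pd.to_datetime(df["timestamp"])

# Extract common datetime features.
df["hour"] = df["timestamp"].dt.hour
df["day"] = df["timestamp"].dt.day
df["weekday"] = df["timestamp"].dt.day_name()

# Time-based filtering and grouping.
morning = df[df["hour"] < 12]
events_per_day = df.groupby(df["timestamp"].dt.date).size()
```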
🧪 Ideal for:
Python beginners
Pandas learners
Data wrangling practice
Building beginner Kaggle notebooks
💡 Tip: Start by converting the timestamp to datetime format and see what insights you can extract from user behavior!
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
COVID-19 has been recognized as a global threat, and many studies are being conducted to contribute to the fight against and prevention of this pandemic. This work presents a scholarly-production dataset focused on COVID-19, providing an overview of scientific research activity and making it possible to identify the countries, scientists, and research groups most active in the task force combating the coronavirus disease. The dataset comprises 40,212 records of article metadata collected from the Scopus, PubMed, arXiv, and bioRxiv databases from January 2019 to July 2020. The data were extracted using Python web-scraping techniques and preprocessed with pandas data wrangling.
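The paper's own pipeline is not reproduced here; purely as a hedged illustration of the approach described (Python scraping followed by pandas wrangling), the sketch below queries the public arXiv API, one of the four sources, and cleans the resulting metadata. The query parameters and field choices are assumptions.

```python
import requests
import pandas as pd
import xml.etree.ElementTree as ET

# Query the public arXiv Atom API (one of the four sources named above).
url = "http://export.arxiv.org/api/query"
params = {"search_query": "all:COVID-19", "max_results": 100}
xml_text = requests.get(url, params=params).text

# Pull title and publication date out of each Atom entry.
ns = {"a": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(xml_text)
records = [
    {
        "title": e.findtext("a:title", namespaces=ns),
        "published": e.findtext("a:published", namespaces=ns),
    }
    for e in root.findall("a:entry", namespaces=ns)
]

# Typical pandas wrangling: normalize whitespace and drop duplicates.
df = pd.DataFrame(records)
df["title"] = df["title"].str.replace(r"\s+", " ", regex=True).str.strip()
df = df.drop_duplicates(subset="title")
```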
Data set consisting of data joined for analyzing the SBIR/STTR program. Data consists of individual awards and agency-level observations. The R and Python code required for pulling, cleaning, and creating useful data sets is included.

Allard_Get and Clean Data.R: Code for getting, cleaning, and joining the numerous data sets that this project combined. This code is written in the R language and can be used in any R environment running R 3.5.1 or higher. If the other files in this Dataverse are downloaded to the working directory, this R code will replicate the original study without the user needing to update any file paths.

Allard SBIR STTR WebScraper.py: The code I deployed to multiple Amazon EC2 instances to scrape data on each individual award in my data set, including the contact info and DUNS data.

Allard_Analysis_APPAM SBIR project: Forthcoming.

Allard_Spatial Analysis: Forthcoming.

Awards_SBIR_df.Rdata: This unique data set consists of 89,330 observations spanning the years 1983 to 2018 and accounting for all eleven SBIR/STTR agencies. It combines data collected from the Small Business Administration's Awards API with unique data collected through web scraping by the author.

Budget_SBIR_df.Rdata: 246 observations for 20 agencies across 25 years of their budget performance in the SBIR/STTR program. Data was collected from the Small Business Administration using the Annual Reports Dashboard, the Awards API, and an author-designed web crawler of the award websites.

Solicit_SBIR-df.Rdata: Observations of solicitations published by agencies for the SBIR program, collected from the SBA Solicitations API.

Primary sources:
Small Business Administration. "Annual Reports Dashboard," 2018. https://www.sbir.gov/awards/annual-reports.
Small Business Administration. "SBIR Awards Data," 2018. https://www.sbir.gov/api.
Small Business Administration. "SBIR Solicit Data," 2018. https://www.sbir.gov/api.
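The study's own extraction code is in R (Allard_Get and Clean Data.R above); purely as an illustration of pulling from the SBA Awards API cited in the primary sources, here is a minimal Python sketch. The endpoint path and parameter names are assumptions, not documented values.

```python
import requests
import pandas as pd

# Hypothetical call against the SBA Awards API cited above; the exact
# endpoint path and parameter names are assumptions.
resp = requests.get(
    "https://www.sbir.gov/api/awards.json",
    params={"agency": "DOE", "year": 2018},
    timeout=30,
)
awards = pd.DataFrame(resp.json())

# Basic cleaning analogous to the R pipeline: keep one row per award.
awards = awards.drop_duplicates()
```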
Community Data License Agreement, Sharing 1.0: https://cdla.io/sharing-1-0/
Hello all,
This dataset is my humble attempt to allow myself and others to upgrade essential Python packages to their latest versions. It contains the .whl files of the packages below, for use across general kernels and especially in internet-off code challenges (a usage sketch follows the table):
| Package | Version(s) | Functionality |
|---|---|---|
| AutoGluon | 1.0.0 | AutoML models |
| CatBoost | 1.2.2, 1.2.3 | ML models |
| Iterative-Stratification | 0.1.7 | Iterative stratification for multi-label classifiers |
| Joblib | 1.3.2 | File dumping and retrieval |
| LAMA | 0.3.8b1 | AutoML models |
| LightGBM | 4.3.0, 4.2.0, 4.1.0 | ML models |
| MAPIE | 0.8.2 | Quantile regression |
| NumPy | 1.26.3 | Data wrangling |
| Pandas | 2.1.4 | Data wrangling |
| Polars | 0.20.3, 0.20.4 | Data wrangling |
| PyTorch | 2.0.1 | Neural networks |
| PyTorch-TabNet | 4.1.0 | Neural networks |
| PyTorch-Forecast | 0.7.0 | Neural networks |
| Pygwalker | 0.3.20 | Data wrangling and visualization |
| Scikit-learn | 1.3.2, 1.4.0 | ML models / pipelines / data wrangling |
| SciPy | 1.11.4 | Data wrangling / statistics |
| TabPFN | 10.1.9 | ML models |
| Torch-Frame | 1.7.5 | Neural networks |
| TorchVision | 0.15.2 | Neural networks |
| XGBoost | 2.0.2, 2.0.1, 2.0.3 | ML models |
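As a usage sketch for an internet-off kernel, point pip at the dataset's wheel directory instead of PyPI; the mount path below is hypothetical:

```python
import subprocess
import sys

# Hypothetical mount point for this dataset inside a Kaggle kernel.
WHEEL_DIR = "/kaggle/input/python-wheel-files"

# --no-index keeps pip off the network; --find-links resolves packages
# from the local wheel files instead of PyPI.
subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "--no-index", f"--find-links={WHEEL_DIR}", "lightgbm==4.3.0"],
    check=True,
)
```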
I plan to update this dataset with more libraries and later versions as they get upgraded in due course. I hope these wheel files are useful to one and all.
Best regards and happy learning and coding!
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Dataset Card for DS Coder Instruct Dataset
DS Coder is a dataset for instruction fine-tuning of language models. It is a specialized dataset focusing only on data science (e.g. plotting, data wrangling, machine learning models, deep learning, and numerical computations). The dataset contains code examples in both R and Python. The goal of this dataset is to enable the creation of small-scale, specialized language-model assistants for data science projects.
Dataset Details… See the full description on the dataset page: https://huggingface.co/datasets/ed001/ds-coder-instruct-v1.
Netflix Dataset Exploration and Visualization
This project involves an in-depth analysis of the Netflix dataset to uncover key trends and patterns in the streaming platform’s content offerings. Using Python libraries such as Pandas, NumPy, and Matplotlib, this notebook visualizes and interprets critical insights from the data.
Objectives:
Analyze the distribution of content types (Movies vs. TV Shows)
Identify the most prolific countries producing Netflix content
Study the ratings and duration of shows
Handle missing values using techniques like interpolation, forward-fill, and custom replacements (see the sketch after this list)
Enhance readability with bar charts, horizontal plots, and annotated visuals
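A minimal sketch of the missing-value step referenced above, assuming the standard Netflix dataset file and column names (not the notebook's exact code):

```python
import pandas as pd

df = pd.read_csv("netflix_titles.csv")  # common file name; an assumption

# Parse date_added, forward-fill its gaps, use a custom replacement for
# rating, and fill missing duration with the most frequent value.
df["date_added"] = pd.to_datetime(df["date_added"].str.strip(), errors="coerce")
df["date_added"] = df["date_added"].ffill()
df["rating"] = df["rating"].fillna("Not Rated")
df["duration"] = df["duration"].fillna(df["duration"].mode()[0])
```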
Key Visualizations:
Bar charts for type distribution and country-wise contributions
Handling missing data in rating, duration, and date_added
Annotated plots showing values for clarity (sketched below)
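And a sketch of one such annotated visual, again assuming the standard column names:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("netflix_titles.csv")  # common file name; an assumption

# Count Movies vs. TV Shows and write the count above each bar.
counts = df["type"].value_counts()
fig, ax = plt.subplots()
bars = ax.bar(counts.index, counts.values)
ax.bar_label(bars)
ax.set_title("Netflix content by type")
ax.set_ylabel("Number of titles")
plt.show()
```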
Tools Used:
Python 3
Pandas for data wrangling
Matplotlib for visualizations
Jupyter Notebook for hands-on analysis
Outcome: This project provides a clear view of Netflix's content library, helping data enthusiasts and beginners understand how to process, clean, and visualize real-world datasets effectively.
Feel free to fork, adapt, and extend the work.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data collected by E. Hunting et al., comprising video footage and electric-field recordings from a video camera and a field mill, respectively. Data wrangling was done by K. Manser, the author of the Python script.
CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Dataset columns: New case; New case (7 day rolling average); Recovered; Active case; Local cases; Imported case; ICU; Death; Cumulative deaths; People tested; Cumulative people tested; Positivity rate; Positivity rate (7 day rolling average).
Columns 1 to 22 are Twitter data; the Tweets are retrieved from Health DG @DGHisham's timeline with the Twitter API. A typical covid situation update Tweet is written in a relatively fixed format. Data wrangling is done in Python/Pandas, with numerical values extracted using regular expressions (RegEx). Missing data are added manually from the Desk of DG (kpkesihatan).
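A minimal sketch of that RegEx extraction, run on an invented tweet in the DG's usual fixed format (the wording and labels are made up for illustration):

```python
import re

# Invented example text in the typical fixed-format style of the updates.
tweet = "Status terkini #COVID19: Kes baharu: 1,234 | Kes import: 12 | Kematian: 5"

# Pull "label: number" pairs; commas inside numbers are allowed.
pairs = re.findall(r"([A-Za-z ]+):\s*([\d,]+)", tweet)
counts = {label.strip(): int(value.replace(",", "")) for label, value in pairs}
print(counts)  # {'Kes baharu': 1234, 'Kes import': 12, 'Kematian': 5}
```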
Column 23 ['remark'] is my own written remark regarding the Tweet status/content.
Column 24 ['Cumulative people tested'] is transcribed from an image on the MOH COVID-19 website: specifically, the first image under the TABURAN KES section of each Situasi Terkini daily webpage at http://covid-19.moh.gov.my/terkini. If missing, the image from the CPRC KKM Telegram or a KKM Facebook Live video is used. Data in this column dated from 1 March 2020 to 11 Feb 2021 are from Our World in Data; their data collection method is as stated here.
MOH does not publish any covid data in csv/excel format as of today; they provide the data as is, along with infographics that are hardly informative. In an undisclosed email exchange, MOH did not seem to understand my request to release the covid public health data for anyone to download and analyze as they wish.
A simple visualization dashboard is now published on Tableau Public and is updated daily. Do check it out! More charts will be added in the near future.
Create better visualizations to help fellow Malaysians understand the Covid-19 situation. Empower the data science community.