https://webtechsurvey.com/terms
A complete list of live websites using the Import Users From Csv With Meta technology, compiled through global website indexing conducted by WebTechSurvey.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Project Description:
Title: Pandas Data Manipulation and File Conversion
Overview: This project aims to demonstrate the basic functionalities of Pandas, a powerful data manipulation library in Python. In this project, we will create a DataFrame, perform some data manipulation operations using Pandas, and then convert the DataFrame into both Excel and CSV formats.
Key Objectives:
Tools and Libraries Used:
Project Implementation:
DataFrame Creation:
Data Manipulation:
File Conversion: Convert the DataFrame to Excel with the to_excel() function and to CSV with the to_csv() function.
Expected Outcome:
Upon completion of this project, you will have gained a fundamental understanding of how to work with Pandas DataFrames, perform basic data manipulation tasks, and convert DataFrames into different file formats. This knowledge will be valuable for data analysis, preprocessing, and data export tasks in various data science and analytics projects.
Conclusion:
The Pandas library offers powerful tools for data manipulation and file conversion in Python. By completing this project, you will have acquired essential skills that are widely applicable in the field of data science and analytics. You can further extend this project by exploring more advanced Pandas functionalities or integrating it into larger data processing pipelines. In this project, we add a number of records, turn them into DataFrames, save them to a single Excel file as separately named sheets, and then convert that Excel file to CSV.
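A minimal sketch of that workflow is shown below; the column names and file names are invented for illustration and are not part of the original project:

```python
import pandas as pd

# Build two small DataFrames (the actual columns in the project may differ).
students = pd.DataFrame({"name": ["Asha", "Ben", "Chen"], "score": [88, 92, 79]})
courses = pd.DataFrame({"course": ["Math", "Physics"], "credits": [3, 4]})

# Save both DataFrames into a single Excel workbook, one sheet each.
with pd.ExcelWriter("project_data.xlsx") as writer:
    students.to_excel(writer, sheet_name="students", index=False)
    courses.to_excel(writer, sheet_name="courses", index=False)

# Read every sheet back and convert each one to its own CSV file.
sheets = pd.read_excel("project_data.xlsx", sheet_name=None)
for sheet_name, df in sheets.items():
    df.to_csv(f"{sheet_name}.csv", index=False)
```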
https://webtechsurvey.com/terms
A complete list of live websites using the AIT CSV Import / Export technology, compiled through global website indexing conducted by WebTechSurvey.
https://data.gov.tw/license
Provide "Statistics of Import and Export Trade Volume of Each Park" to let the public understand the import and export and its growth trend of each park. In addition to updating this information every month, CSV file format is also provided for free download and use by the public.The dataset includes statistics on the import and export trade volume of parks such as Nanzih, Kaohsiung, Taichung, Zhonggang, Pingtung, and other parks (Lingguang, Chenggong, Gaoruan), with main fields including "Park, Import and Export (This Month, Year-to-Date)", "Export (This Month, Year-to-Date)", "Import (This Month, Year-to-Date)", and other important information.
Csv Marketing Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
This dataset was created from images of hand signs; the hand landmarks detected in those images were made into the attributes of the dataset. It contains all 21 landmarks, each with its (x, y, z) coordinates, and 5 classes (1, 2, 3, 4, 5).
You can also add more classes to your dataset by running the following code. Make sure to create an empty dataset (or append to the existing one) and to set the file path correctly.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import mediapipe as mp
import cv2
import os

rows = []  # one dict of landmark coordinates per detected hand

# Class folders data/1/ ... data/5/ hold the images for classes 1-5
for t in range(1, 6):
    path = 'data/' + str(t) + '/'
    images = os.listdir(path)
    for i in images:
        image = cv2.imread(path + i)
        if image is None:  # skip anything that is not a readable image
            continue
        mp_hands = mp.solutions.hands
        hands = mp_hands.Hands(static_image_mode=False, max_num_hands=1,
                               min_detection_confidence=0.8,
                               min_tracking_confidence=0.8)
        mp_draw = mp.solutions.drawing_utils
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image.flags.writeable = False
        results = hands.process(image)
        image.flags.writeable = True
        if results.multi_hand_landmarks:
            for hand_no, hand_landmarks in enumerate(results.multi_hand_landmarks):
                mp_draw.draw_landmarks(image=image, landmark_list=hand_landmarks,
                                       connections=mp_hands.HAND_CONNECTIONS)
                a = dict()
                a['label'] = t
                # One column per landmark coordinate, e.g. WRIST_x, WRIST_y, WRIST_z
                s = ('x', 'y', 'z')
                for lm in range(21):
                    k = (hand_landmarks.landmark[lm].x,
                         hand_landmarks.landmark[lm].y,
                         hand_landmarks.landmark[lm].z)
                    for j in range(len(k)):
                        a[str(mp_hands.HandLandmark(lm).name) + '_' + str(s[j])] = k[j]
                rows.append(a)

# DataFrame.append() is deprecated in recent pandas, so build the frame from the collected rows
df = pd.DataFrame(rows)
```
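The df produced above can then be merged into your copy of the landmark CSV. A minimal sketch, assuming the dataset is stored in a file named hand_landmarks.csv (a placeholder name, adjust the path to your copy):

```python
import os
import pandas as pd

csv_path = "hand_landmarks.csv"  # placeholder path to your copy of the dataset

if os.path.exists(csv_path):
    # Append the newly extracted rows to the existing landmark table
    combined = pd.concat([pd.read_csv(csv_path), df], ignore_index=True)
else:
    combined = df  # start a fresh dataset if no file exists yet

combined.to_csv(csv_path, index=False)
```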
https://www.gnu.org/licenses/gpl-3.0.html
Data pulled from Traffy Fondue by accessing the Traffy Fondue Open API, covering January 2022 until January 2025.
The following code pulled the data:
import os
import json
import requests
from datetime import datetime, timedelta
import time

class TraffyDataFetcher:
    def __init__(self, start_date, subfolder='traffyfonduedata'):
        self.url = "https://publicapi.traffy.in.th/share/teamchadchart/search"
        self.query = {'offset': '0'}
        self.payload = {}
        self.headers = {}
        self.start_date = datetime.strptime(start_date, '%Y-%m-%d')
        self.end_date = datetime.now()
        self.subfolder = subfolder
        self.max_requests_per_minute = 99
        if not os.path.exists(self.subfolder):
            os.makedirs(self.subfolder)

    def add_days_to_date(self, start_date_str, days_to_add):
        start_date = datetime.strptime(start_date_str, '%Y-%m-%d')
        new_date = start_date + timedelta(days=days_to_add)
        return new_date.strftime('%Y-%m-%d')

    def fetch_data(self):
        current_date = self.start_date
        index = 0
        while current_date <= self.end_date:
            start_time = datetime.now()
            # Request a 10-day window starting at current_date
            self.query['start'] = current_date.strftime('%Y-%m-%d')
            new_date = self.add_days_to_date(self.query['start'], 10)
            self.query['end'] = new_date
            response = requests.request("GET", self.url, headers=self.headers,
                                        data=self.payload, params=self.query)
            print(f"offset: {index} response: {response.status_code}")
            # Save each window's raw JSON response to its own file
            filename = f"traffy_{current_date.strftime('%Y-%m-%d')}.json"
            file_path = os.path.join(self.subfolder, filename)
            with open(file_path, "w") as outfile:
                json_object = json.dumps(response.json(), indent=4)
                outfile.write(json_object)
            end_time = datetime.now()
            elapsed_time = (end_time - start_time).total_seconds()
            print(f"Elapsed time: {elapsed_time} s")
            index += 950
            current_date = datetime.strptime(new_date, '%Y-%m-%d') + timedelta(days=1)
            # Simple throttling between batches (guard against negative sleep times)
            if index % self.max_requests_per_minute == 0:
                time.sleep(max(0, 60 - elapsed_time))

if __name__ == "__main__":
    fetcher = TraffyDataFetcher(start_date='2022-01-01')
    fetcher.fetch_data()
--
And the following code converted the JSON files to CSV files:
import os
import glob
import json
import pandas as pd
#import numpy as np

class TraffyJSONFixer:
    def __init__(self, path_to_json='*.json', subfolder='traffyfonduedata'):
        self.path_to_json = path_to_json
        self.subfolder = subfolder
        self.outputfolder = 'fixedjson'
        self.excelfolder = 'exceloutput'
        self.file_path = os.path.join(self.subfolder, self.path_to_json)
        self.json_files = glob.glob(self.file_path)
        # Ensure the subfolder exists
        if not os.path.exists(self.subfolder):
            os.makedirs(self.subfolder)
        # Ensure the outputfolder exists
        if not os.path.exists(self.outputfolder):
            os.makedirs(self.outputfolder)
        # Ensure the excelfolder exists
        if not os.path.exists(self.excelfolder):
            os.makedirs(self.excelfolder)
        # Debugging: Print the current working directory and the list of JSON files
        print(f"Current working directory: {os.getcwd()}")
        print(f"Found JSON files: {self.json_files}")

    def fix_json_files(self):
        for count, ele in enumerate(self.json_files):
            new_file_name = os.path.join(self.outputfolder, f"data_{os.path.basename(ele)}")
            try:
                with open(ele, 'r', encoding='utf-8') as f:
                    data = json.load(f)
                # Debugging: Print the type of data
                print(f"Processing file: {ele}")
                print(f"Type of data: {type(data)}")
                # Handle different JSON structures
                if isinstance(data, dict) and "results" in data:
                    results = data["results"]
                elif isinstance(data, list):
                    results = data
                else:
                    print(f"Unexpected JSON structure in file: {ele}")
                    continue
                # Ensure results is a list or dict before writing
                if isinstance(results, (list, dict)):
                    with open(new_file_name, 'w', encoding='utf-8') as f:
                        f.write(json.dumps(results, indent=4))
                else:
                    print(f"Unexpected type for results in file: {ele}")
            except (json.JSONDecodeError, KeyError) as e:
                print(f"Error processing file {ele}: {e}")

    def jsontoexcel(self):
        jsonfile_path = os.path.join(self.out...
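For readers who only need CSV output from the fixed JSON files, a minimal, independent sketch under the same folder assumptions (fixedjson/ as input, exceloutput/ as output) could flatten each file with pandas and write it out as CSV; this is a sketch, not the author's original jsontoexcel implementation:

```python
import glob
import json
import os

import pandas as pd

# Folder names mirroring those used in TraffyJSONFixer above (assumed layout).
input_folder = 'fixedjson'
output_folder = 'exceloutput'
os.makedirs(output_folder, exist_ok=True)

for path in glob.glob(os.path.join(input_folder, '*.json')):
    with open(path, 'r', encoding='utf-8') as f:
        records = json.load(f)
    # json_normalize flattens nested dicts into dotted column names.
    df = pd.json_normalize(records)
    out_name = os.path.splitext(os.path.basename(path))[0] + '.csv'
    df.to_csv(os.path.join(output_folder, out_name), index=False)
```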
https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html
Replication pack, FSE2018 submission #164
------------------------------------------
**Working title:** Ecosystem-Level Factors Affecting the Survival of Open-Source Projects: A Case Study of the PyPI Ecosystem

**Note:** link to data artifacts is already included in the paper. Link to the code will be included in the Camera Ready version as well.

Content description
===================

- **ghd-0.1.0.zip** - the code archive. This code produces the dataset files described below
- **settings.py** - settings template for the code archive.
- **dataset_minimal_Jan_2018.zip** - the minimally sufficient version of the dataset. This dataset only includes stats aggregated by the ecosystem (PyPI)
- **dataset_full_Jan_2018.tgz** - full version of the dataset, including project-level statistics. It is ~34Gb unpacked. This dataset still doesn't include PyPI packages themselves, which take around 2TB.
- **build_model.r, helpers.r** - R files to process the survival data (`survival_data.csv` in **dataset_minimal_Jan_2018.zip**, `common.cache/survival_data.pypi_2008_2017-12_6.csv` in **dataset_full_Jan_2018.tgz**)
- **Interview protocol.pdf** - approximate protocol used for semistructured interviews.
- LICENSE - text of GPL v3, under which this dataset is published
- INSTALL.md - replication guide (~2 pages)
Replication guide
=================

Step 0 - prerequisites
----------------------

- Unix-compatible OS (Linux or OS X)
- Python interpreter (2.7 was used; Python 3 compatibility is highly likely)
- R 3.4 or higher (3.4.4 was used, 3.2 is known to be incompatible)

Depending on detalization level (see Step 2 for more details):

- up to 2Tb of disk space (see Step 2 detalization levels)
- at least 16Gb of RAM (64 preferable)
- few hours to few months of processing time

Step 1 - software
----------------

- unpack **ghd-0.1.0.zip**, or clone from gitlab:

      git clone https://gitlab.com/user2589/ghd.git
      git checkout 0.1.0

  `cd` into the extracted folder. All commands below assume it as a current directory.
- copy `settings.py` into the extracted folder. Edit the file:
  * set `DATASET_PATH` to some newly created folder path
  * add at least one GitHub API token to `SCRAPER_GITHUB_API_TOKENS`
- install docker. For Ubuntu Linux, the command is `sudo apt-get install docker-compose`
- install libarchive and headers: `sudo apt-get install libarchive-dev`
- (optional) to replicate on NPM, install yajl: `sudo apt-get install yajl-tools`. Without this dependency, you might get an error on the next step, but it's safe to ignore.
- install Python libraries: `pip install --user -r requirements.txt`
- disable all APIs except GitHub (Bitbucket and Gitlab support were not yet implemented when this study was in progress): edit `scraper/init.py`, comment out everything except GitHub support in `PROVIDERS`.

Step 2 - obtaining the dataset
-----------------------------

The ultimate goal of this step is to get output of the Python function `common.utils.survival_data()` and save it into a CSV file:

    # copy and paste into a Python console
    from common import utils
    survival_data = utils.survival_data('pypi', '2008', smoothing=6)
    survival_data.to_csv('survival_data.csv')

Since full replication will take several months, here are some ways to speed up the process:

#### Option 2.a, difficulty level: easiest

Just use the precomputed data. Step 1 is not necessary under this scenario.

- extract **dataset_minimal_Jan_2018.zip**
- get `survival_data.csv`, go to the next step

#### Option 2.b, difficulty level: easy

Use precomputed longitudinal feature values to build the final table. The whole process will take 15..30 minutes.

- create a folder `
https://crawlfeeds.com/privacy_policy
The Dog Food Data Extracted from Chewy (USA) dataset contains 4,500 detailed records of dog food products sourced from one of the leading pet supply platforms in the United States, Chewy. This dataset is ideal for businesses, researchers, and data analysts who want to explore and analyze the dog food market, including product offerings, pricing strategies, brand diversity, and customer preferences within the USA.
The dataset includes essential information such as product names, brands, prices, ingredient details, product descriptions, weight options, and availability. Organized in a CSV format for easy integration into analytics tools, this dataset provides valuable insights for those looking to study the pet food market, develop marketing strategies, or train machine learning models.
Key Features:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
LifeSnaps Dataset Documentation
Ubiquitous self-tracking technologies have penetrated various aspects of our lives, from physical and mental health monitoring to fitness and entertainment. Yet, limited data exist on the association between in the wild large-scale physical activity patterns, sleep, stress, and overall health, and behavioral patterns and psychological measurements due to challenges in collecting and releasing such datasets, such as waning user engagement, privacy considerations, and diversity in data modalities. In this paper, we present the LifeSnaps dataset, a multi-modal, longitudinal, and geographically-distributed dataset, containing a plethora of anthropological data, collected unobtrusively for the total course of more than 4 months by n=71 participants, under the European H2020 RAIS project. LifeSnaps contains more than 35 different data types from second to daily granularity, totaling more than 71M rows of data. The participants contributed their data through numerous validated surveys, real-time ecological momentary assessments, and a Fitbit Sense smartwatch, and consented to make these data available openly to empower future research. We envision that releasing this large-scale dataset of multi-modal real-world data, will open novel research opportunities and potential applications in the fields of medical digital innovations, data privacy and valorization, mental and physical well-being, psychology and behavioral sciences, machine learning, and human-computer interaction.
The following instructions will get you started with the LifeSnaps dataset and are complementary to the original publication.
Data Import: Reading CSV
For ease of use, we provide CSV files containing Fitbit, SEMA, and survey data at daily and/or hourly granularity. You can read the files via any programming language. For example, in Python, you can read the files into a Pandas DataFrame with the pandas.read_csv() command.
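For example, a minimal read of one of the provided CSV files might look like the sketch below (the file name daily_fitbit.csv is a placeholder; use the actual file names shipped with the dataset):

```python
import pandas as pd

# Placeholder file name; substitute one of the LifeSnaps CSV files you downloaded.
df = pd.read_csv("daily_fitbit.csv")

print(df.shape)    # number of rows and columns
print(df.dtypes)   # column types, useful for spotting dates stored as strings
print(df.head())   # first few records
```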
Data Import: Setting up a MongoDB (Recommended)
To take full advantage of the LifeSnaps dataset, we recommend that you use the raw, complete data via importing the LifeSnaps MongoDB database.
To do so, open the terminal/command prompt and run the following command for each collection in the DB. Ensure you have the MongoDB Database Tools installed.
For the Fitbit data, run the following:
mongorestore --host localhost:27017 -d rais_anonymized -c fitbit
For the SEMA data, run the following:
mongorestore --host localhost:27017 -d rais_anonymized -c sema
For surveys data, run the following:
mongorestore --host localhost:27017 -d rais_anonymized -c surveys
If you have access control enabled, then you will need to add the --username and --password parameters to the above commands.
Data Availability
The MongoDB database contains three collections, fitbit, sema, and surveys, containing the Fitbit, SEMA3, and survey data, respectively. Similarly, the CSV files contain related information to these collections. Each document in any collection follows the format shown below:
{
_id:
Csv Pharmaceuticals India Private Limited Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains results and additional data related to the publication "Import options for chemical energy carriers from renewable sources to Germany".
Files containing major results / important cost input data:
results.csv: Contains major model results for all scenarios as a CSV file (separator is ';', all fields are quoted using double quotation marks '"'). Can be explored using standard software like Excel/LibreOffice or other tools; see the short read sketch after this list.
costs.zip: Technology-specific input cost assumptions for 2030, 2040, and 2050.
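A minimal way to load results.csv with the separator and quoting described above, assuming pandas (any CSV reader with a configurable delimiter works equally well):

```python
import pandas as pd

# ';' separator and double-quoted fields, as described for results.csv
results = pd.read_csv("results.csv", sep=";", quotechar='"')

print(results.columns.tolist())  # inspect which scenario/result columns are present
print(results.head())
```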
The dataset further contains the following archives related to the model structure as contained in the software repository (GitHub):
config.zip: File contents of the config/ folder of the model directory. Configuration files for running the model used by the publication.
data.zip: File contents of the data/ folder of the model directory. Includes distance specifications, conversion efficiencies, details on shipping transport. Also contains (with this version) the cost data (same as in costs.zip).
resources.zip: Some file contents of the resources/ folder of the model directory. Most files in this folder are automatically recreated when the Snakemake workflow is executed. The files in this archive were created by GlobalEnergyGIS (RES supply time-series and demand data for the investigated regions), which is difficult to set up; they are thus provided here as an optional dataset for download.
results.zip: Optimised energy system models (PyPSA networks, for PyPSA version v0.19.3) for all scenarios (default 10% WACC, optimistic 5% WACC, scenarios for sensitivity analysis), energy supply chains (ESCs) and exporting countries. For each network an additional results.csv exists containing a number of key results extracted from each network. Also contains the combined results.csv file as results/results.csv for all scenario runs.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset Card for Demo
Dataset Summary
This is a demo dataset with two files, train.csv and test.csv. Load it by:

    from datasets import load_dataset

    data_files = {"train": "train.csv", "test": "test.csv"}
    demo = load_dataset("stevhliu/demo", data_files=data_files)
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
[More Information… See the full description on the dataset page: https://huggingface.co/datasets/Axion004/my-awesome-dataset.
https://creativecommons.org/publicdomain/zero/1.0/
IMPORTANT: to import the CSV file, remember to specify ';' (semicolon) as the separator.
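For example, with Python's built-in csv module (the file name mazda_chileautos.csv is a placeholder for whatever you name the downloaded file):

```python
import csv

# Placeholder file name for the downloaded dataset
with open("mazda_chileautos.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter=";")  # semicolon separator, as noted above
    for row in reader:
        print(row)   # each row is a dict keyed by the column headers
        break        # remove this to iterate over all 1012 listings
```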
I hope you can do:
This dataset was obtained through a Web Scraping project from the website www.chileautos.cl and contains information on 1012 used Mazda car listings.
As it was created for academic and educational purposes, and considering that the website offers a large number of listings, I decided to only collect data for the mentioned brand because, who doesn't like Mazdas?
This dataset contains the following columns:
Csv Investments Private Limited Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a local CSV file of WormJam (https://www.tandfonline.com/doi/full/10.1080/21624054.2017.1373939) for MetFrag (https://msbi.ipb-halle.de/MetFrag/).
The text file provided by Michael (also part of this dataset) was modified into CSV by adding identifiers and adjusting headers for MetFrag import.
This CSV file is for users wanting to integrate WormJam into MetFrag CL (offline) workflows. The file will be integrated into MetFrag online as well; there, please use the file from the dropdown menu rather than uploading this one.
Update 10 Sept 2019: curated truncated InChIKey and InChI entries, added missing SMILES, and added DTXSIDs by InChIKey match.
https://choosealicense.com/licenses/odbl/
This is the initial dataset we scraped from open maps.
This dataset has not been cleaned yet, so be aware!
!pip install requests
script
import csv
import time
import requests
from urllib.parse import quote
OUT_CSV = "jabodetabek_sports_osm.csv"
BBOX = (-6.80, 106.30, -5.90, 107.20)
OVERPASS_URL = "https://overpass-api.de/api/interpreter"
WIKIDATA_ENTITY_URL = "https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
FETCH_WIKIDATA_IMAGES =… See the full description on the dataset page: https://huggingface.co/datasets/Shiowo2/Initial-Data-FitMatrix.
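The constants above suggest an Overpass API pull over the Jabodetabek bounding box. A rough, hypothetical sketch of such a request is shown below; the actual OSM tags queried for this dataset are not visible in the truncated description, so leisure=sports_centre is only an illustrative choice:

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"
BBOX = (-6.80, 106.30, -5.90, 107.20)  # south, west, north, east

# Illustrative query: the real dataset may target different sport-related tags.
query = f"""
[out:json][timeout:60];
(
  node["leisure"="sports_centre"]({BBOX[0]},{BBOX[1]},{BBOX[2]},{BBOX[3]});
  way["leisure"="sports_centre"]({BBOX[0]},{BBOX[1]},{BBOX[2]},{BBOX[3]});
);
out center;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
elements = response.json().get("elements", [])
print(f"Fetched {len(elements)} OSM elements")
```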
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Credit report of Csv Eglilio Llc contains unique and detailed export-import market intelligence, with its phone, email, and LinkedIn contacts and details of each import and export shipment such as product, quantity, price, buyer and supplier names, country, and date of shipment.
This dataset includes all the data and R code needed to reproduce the analyses in a forthcoming manuscript: Copes, W. E., Q. D. Read, and B. J. Smith. Environmental influences on drying rate of spray applied disinfestants from horticultural production services. PhytoFrontiers, DOI pending.

Study description: Instructions for disinfestants typically specify a dose and a contact time to kill plant pathogens on production surfaces. A problem occurs when disinfestants are applied to large production areas where the evaporation rate is affected by weather conditions. The common contact time recommendation of 10 min may not be achieved under hot, sunny conditions that promote fast drying. This study is an investigation into how the evaporation rates of six commercial disinfestants vary when applied to six types of substrate materials under cool to hot and cloudy to sunny weather conditions. Initially, disinfestants with low surface tension spread out to provide 100% coverage and disinfestants with high surface tension beaded up to provide about 60% coverage when applied to hard smooth surfaces. Disinfestants applied to porous materials were quickly absorbed into the body of the material, such as wood and concrete. Even though disinfestants evaporated faster under hot sunny conditions than under cool cloudy conditions, coverage was reduced considerably in the first 2.5 min under most weather conditions and reduced to less than or equal to 50% coverage by 5 min.

Dataset contents: This dataset includes R code to import the data and fit Bayesian statistical models using the model fitting software CmdStan, interfaced with R using the packages brms and cmdstanr. The models (one for 2022 and one for 2023) compare how quickly different spray-applied disinfestants dry, depending on what chemical was sprayed, what surface material it was sprayed onto, and what the weather conditions were at the time. Next, the statistical models are used to generate predictions and compare mean drying rates between the disinfestants, surface materials, and weather conditions. Finally, tables and figures are created. These files are included:

Drying2022.csv: drying rate data for the 2022 experimental run
Weather2022.csv: weather data for the 2022 experimental run
Drying2023.csv: drying rate data for the 2023 experimental run
Weather2023.csv: weather data for the 2023 experimental run
disinfestant_drying_analysis.Rmd: RMarkdown notebook with all data processing, analysis, and table creation code
disinfestant_drying_analysis.html: rendered output of notebook
MS_figures.R: additional R code to create figures formatted for journal requirements
fit2022_discretetime_weather_solar.rds: fitted brms model object for 2022. This will allow users to reproduce the model prediction results without having to refit the model, which was originally fit on a high-performance computing cluster
fit2023_discretetime_weather_solar.rds: fitted brms model object for 2023
data_dictionary.xlsx: descriptions of each column in the CSV data files
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Wikipedia Prompts
Created by pairing a gpt-4o-mini request with a title fetched from Wikipedia's API, this generated a short 75-word prompt along with the title of the randomly selected article. This current version has not been cleaned or pruned, so minor errors in formatting might exist, as well as duplications. Further versions will be numbered to show their improved formatting. import requests import random import csv import time from openai import OpenAI from datetime import datetime… See the full description on the dataset page: https://huggingface.co/datasets/080-ai/20k_wikipedia_title_prompts.
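As a rough sketch of the kind of generation loop implied above (endpoint choice, prompt wording, and CSV layout are all assumptions, not the dataset author's exact code):

```python
import csv
import requests
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def random_wikipedia_title() -> str:
    # MediaWiki Action API: return one random main-namespace article title.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "list": "random", "rnnamespace": 0,
                "rnlimit": 1, "format": "json"},
    )
    return resp.json()["query"]["random"][0]["title"]

with open("wikipedia_prompts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "prompt"])
    for _ in range(3):  # the real dataset contains ~20k rows
        title = random_wikipedia_title()
        completion = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"Write a 75-word writing prompt inspired by the Wikipedia article '{title}'."}],
        )
        writer.writerow([title, completion.choices[0].message.content])
```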