https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains detailed information about all cards available in the Pokémon Trading Card Game Pocket mobile app. The data has been carefully curated and cleaned to provide Pokémon enthusiasts and developers with accurate and comprehensive card information.
Column | Description | Example |
---|---|---|
set_name | Full name of the card set | "Eevee Grove" |
set_code | Official set identifier | "a3b" |
set_release_date | Set release date | "June 26, 2025" |
set_total_cards | Total cards in the set | 107 |
pack_name | Name of the specific pack | "Eevee Grove" |
card_name | Full card name | "Leafeon" |
card_number | Card number within set | "2" |
card_rarity | Rarity classification | "Rare" |
card_type | Card type category | "Pokémon" |
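The schema above drops straight into Python's csv module; a minimal sketch (the sample rows below are invented to match the documented columns):

```python
import csv, io
from collections import Counter

# Hypothetical sample rows following the column schema documented above.
sample = """set_name,set_code,set_release_date,set_total_cards,pack_name,card_name,card_number,card_rarity,card_type
Eevee Grove,a3b,"June 26, 2025",107,Eevee Grove,Leafeon,2,Rare,Pokémon
Eevee Grove,a3b,"June 26, 2025",107,Eevee Grove,Eevee,1,Common,Pokémon
"""

cards = list(csv.DictReader(io.StringIO(sample)))
# Quoted fields keep the comma inside "June 26, 2025" intact.
by_rarity = Counter(card["card_rarity"] for card in cards)
```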
If you find this dataset useful, consider giving it an upvote — it really helps others discover it too! 🔼😊
Happy analyzing! 🎯📊
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Contains:
The US Census Bureau conducts the American Community Survey (ACS) 1-year and 5-year surveys, which record various demographics, and provides public access through APIs. I called these APIs from a Python environment using the requests library, then cleaned and organized the data into a usable format.
ACS Subject data [2011-2019] was accessed using Python by following the below API Link:
https://api.census.gov/data/2011/acs/acs1?get=group(B08301)&for=county:*
The data was obtained in JSON format by calling the above API and imported as a Python pandas DataFrame. The 84 variables returned comprise 21 estimate values for various metrics, the 21 corresponding margins of error, and annotation values for both the estimates and the margins of error. The data then went through various cleaning steps in Python, in which excess variables were removed and the columns were renamed. Web scraping was used to extract the variables' names and replace the codes in the column names of the raw data.
The above steps were carried out for multiple ACS/ACS-1 datasets spanning 2011-2019, which were then merged into a single pandas DataFrame. The columns were rearranged, and the "NAME" column was split into two columns, 'StateName' and 'CountyName.' Counties for which no data was available were removed from the DataFrame. Once the DataFrame was ready, it was separated into two new DataFrames, one for state data and one for county data, and exported in '.csv' format.
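The reshaping step described above can be sketched with the standard library alone; the payload below is a hand-made stand-in for the real API response (which returns a JSON array whose first row is the header):

```python
import json

# Hand-made stand-in for the JSON returned by
# https://api.census.gov/data/2011/acs/acs1?get=group(B08301)&for=county:*
# The real response is a list of rows whose first element is the header.
payload = json.loads("""
[["NAME","B08301_001E","state","county"],
 ["Autauga County, Alabama","24036","01","001"],
 ["Baldwin County, Alabama","83587","01","003"]]
""")

header, *rows = payload
records = [dict(zip(header, row)) for row in rows]

# Split "NAME" into StateName / CountyName, as done for the published CSVs.
for rec in records:
    county, state = rec.pop("NAME").split(", ")
    rec["CountyName"], rec["StateName"] = county, state
```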
More information about the source of Data can be found at the URL below:
US Census Bureau. (n.d.). About: Census Bureau API. Retrieved from Census.gov
https://www.census.gov/data/developers/about.html
I hope this data helps you create something beautiful and awesome. I will be posting many more datasets soon, if I get time between assignments, submissions, and semester projects 🧙🏼♂️. Good luck.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Con Espressione Game Dataset

A piece of music can be expressively performed, or interpreted, in a variety of ways. With the help of an online questionnaire, the Con Espressione Game, we collected some 1,500 descriptions of expressive character relating to 45 performances of 9 excerpts from classical piano pieces, played by different famous pianists. More specifically, listeners were asked to describe, using freely chosen words (preferably adjectives), how they perceive the expressive character of the different performances. The aim of this research is to find the dimensions of musical expression (in Western classical piano music) that can be attributed to a performance, as perceived and described in natural language by listeners. The Con Espressione Game was launched on 3 April 2018.

Dataset structure

Listeners' descriptions of expressive performance

piece_performer_data.csv: a comma-separated values (CSV) file containing information about the pieces in the dataset. Strings are delimited with ". The columns in this file are:
- music_id: an integer ID for each performance in the dataset.
- performer_name: (last) name of the performer.
- piece_name: (short) name of the piece.
- performance_name: name of the performance. All files in different modalities (alignments, MIDI, loudness features, etc.) corresponding to a single performance have the same name (but possibly different extensions).
- composer: name of the composer of the piece.
- piece: full name of the piece.
- album: name of the album.
- performer_name_full: full name of the performer.
- year_of_CD_issue: year of issue of the CD.
- track_number: number of the track on the CD.
- length_of_excerpt_seconds: length of the excerpt in seconds.
- start_of_excerpt_seconds: start of the excerpt in its corresponding track (in seconds).
- end_of_excerpt_seconds: end of the excerpt in its corresponding track (in seconds).

con_espressione_game_answers.csv: the main file of the dataset, containing listeners' descriptions of expressive character. The columns in this CSV file are:
- answer_id: an integer ID of the answer; each answer gets a unique ID.
- participant_id: an integer ID of a participant; answers with the same ID come from the same participant.
- music_id: an integer ID of the performance, matching music_id in piece_performer_data.csv described above.
- answer: (cleaned/formatted) participant description. All answers are lower-case, typos were corrected, spaces were replaced by underscores (_), and individual terms are separated by commas. See cleanup_rules.txt for a more detailed description of how the answers were formatted.
- original_answer: raw answer provided by the participant.
- timestamp: timestamp of the answer.
- favorite: a boolean (0 or 1) indicating whether this performance of the piece is the participant's favorite.
- translated_to_english: raw translation (from German, Russian, Spanish, and Italian).
- performer: (last) name of the performer; see piece_performer_data.csv described above.
- piece_name: (short) name of the piece; see piece_performer_data.csv described above.
- performance_name: name of the performance; see piece_performer_data.csv described above.

participant_profiles.csv: a CSV file containing musical background information of the participants. Empty cells mean that the participant did not provide an answer. This file contains the following columns:
- participant_id: an integer ID of a participant.
- music_education_years: (self-reported) number of years of musical education.
- listening_to_classical_music: answer to the question "How often do you listen to classical music?". The possible answers are: 1: Never, 2: Very rarely, 3: Rarely, 4: Occasionally, 5: Frequently, 6: Very frequently.
- registration_date: date and time of registration of the participant.
- playing_piano: answer to the question "Do you play the piano?". The possible answers are: 1: No, 2: A little bit, 3: Quite well, 4: Very well.

cleanup_rules.txt: rules for cleaning/formatting the terms in the participants' answers.
translations_GERMAN.txt: how the translations from German to English were made.

Metadata

Related metadata is stored in the MetaData folder.
- Alignments: manually corrected score-to-performance alignments for each of the pieces in the dataset. Each alignment is a text file.
- ApproximateMIDI: reconstructed MIDI performances created from the alignments and the loudness curves. The onset and offset times of the notes were determined from the alignment times, and the MIDI velocity was computed from the loudness curves.
- Match: score-to-performance alignments in Matchfile format.
- Scores_MuseScore: manually encoded sheet music in MuseScore format (.mscz).
- Scores_MusicXML: sheet music in MusicXML format.
- Scores_pdf: images of the sheet music in PDF format.

Audio Features

Audio features compu
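Since con_espressione_game_answers.csv and piece_performer_data.csv share the music_id key, joining them takes only a few lines; a minimal sketch with invented stand-in rows (real column names, fabricated values):

```python
import csv, io

# Invented stand-ins for piece_performer_data.csv and
# con_espressione_game_answers.csv (real column names, fabricated rows).
pieces = """music_id,performer_name,piece_name
1,Gould,Bach_WTC
2,Argerich,Chopin_Prelude
"""
answers = """answer_id,participant_id,music_id,answer
10,7,1,"calm,delicate"
"""

# Index the piece/performer metadata by music_id, then annotate each answer.
piece_by_id = {row["music_id"]: row for row in csv.DictReader(io.StringIO(pieces))}
joined = [
    {**row, "performer_name": piece_by_id[row["music_id"]]["performer_name"]}
    for row in csv.DictReader(io.StringIO(answers))
]
```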
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the Europe PMC full-text corpus, a collection of 300 articles from the Europe PMC Open Access subset. Each article is manually annotated by curators with 3 core entity types: Gene/Protein, Disease, and Organism.
Corpus Directory Structure
annotations/: contains annotations of the 300 full-text articles in the Europe PMC corpus. Annotations are provided in 3 different formats.
hypothesis/csv/: contains raw annotations fetched from the annotation platform Hypothes.is in comma-separated values (CSV) format.
GROUP0/: contains raw manual annotations made by curator GROUP0.
GROUP1/: contains raw manual annotations made by curator GROUP1.
GROUP2/: contains raw manual annotations made by curator GROUP2.
IOB/: contains annotations automatically extracted from the raw manual annotations in hypothesis/csv/, in Inside–Outside–Beginning (IOB) tagging format.
dev/: contains IOB-format annotations of 45 articles, intended to be used as the dev set in machine learning tasks.
test/: contains IOB-format annotations of 45 articles, intended to be used as the test set in machine learning tasks.
train/: contains IOB-format annotations of 210 articles, intended to be used as the training set in machine learning tasks.
JSON/: contains annotations automatically extracted from the raw manual annotations in hypothesis/csv/, in JSON format.
README.md: a detailed description of all the annotation formats.
articles/: contains the full-text articles annotated in Europe PMC corpus.
Sentencised/: contains XML articles whose text has been split into sentences using the Europe PMC sentenciser.
XML/: contains XML articles directly fetched using Europe PMC Article Restful API.
README.md: a detailed description of the sentencising and fetching of XML articles.
docs/: contains related documents that were used for generating the corpus.
Annotation guideline.pdf: annotation guideline provided to curators to assist the manual annotation.
demo to molecular conenctions.pdf: annotation platform guideline provided to curators to help them get familiar with the Hypothes.is platform.
Training set development.pdf: initial document that details the paper selection procedures.
pilot/: contains annotations and articles that were used in a pilot study.
annotations/csv/: contains raw annotations fetched from the annotation platform Hypothes.is in comma-separated values (CSV) format.
articles/: contains the full-text articles annotated in the pilot study.
Sentencised/: contains XML articles whose text has been split into sentences using the Europe PMC sentenciser.
XML/: contains XML articles directly fetched using Europe PMC Article Restful API.
README.md: a detailed description of the sentencising and fetching of XML articles.
src/: source code for cleaning annotations and generating IOB files.
metrics/ner_metrics.py: Python script containing the SemEval evaluation metrics.
annotations.py: Python script used to extract annotations from raw Hypothes.is annotations.
generate_IOB_dataset.py: Python script used to convert JSON format annotations to IOB tagging format.
generate_json_dataset.py: Python script used to extract annotations to JSON format.
hypothesis.py: Python script used to fetch raw Hypothes.is annotations.
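The IOB files above can be read with a few lines of Python; the sketch below reconstructs entity spans from B-/I-/O tags. The sample lines and the tab delimiter are assumptions for illustration, not the corpus's exact file layout:

```python
# A minimal reader for IOB-tagged files: each line holds a token and its
# tag; an entity span starts at a B-* tag and extends over I-* tags.
# (Sample tokens invented; labels follow the corpus's 3 entity types.)
iob_text = """BRCA1\tB-Gene/Protein
mutations\tO
cause\tO
breast\tB-Disease
cancer\tI-Disease
"""

def iob_to_entities(text):
    entities, current = [], None
    for line in text.splitlines():
        token, tag = line.split("\t")
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = (tag[2:], [token])       # start a new entity span
        elif tag.startswith("I-") and current:
            current[1].append(token)           # continue the current span
        else:                                  # O tag closes any open span
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(tokens)) for label, tokens in entities]

entities = iob_to_entities(iob_text)
```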
License
CC BY
Feedback
For any comments, questions, or suggestions, please contact us at helpdesk@europepmc.org or via the Europe PMC contact page.
https://data.gov.tw/license
Annual comprehensive counter service statistics table for ROC year 110 (2021)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PECD Hydro modelling
This repository contains a more user-friendly version of the Hydro modelling data
released by ENTSO-E with their latest Seasonal Outlook.
The original URLs:
The original ENTSO-E hydropower dataset integrates the PECD (Pan-European Climate Database) released for the MAF 2019
As I did for the wind & solar data, the datasets released in this repository are only a more user- and machine-readable version of the original Excel files. As an avid user of ENTSO-E data, I want to share my data-wrangling efforts to make this dataset more accessible.
Data description
The zipped file contains 86 Excel files, two different files for each ENTSO-E zone.
In this repository you can find 5 CSV files:
- PECD-hydro-capacities.csv: installed capacities
- PECD-hydro-weekly-inflows.csv: weekly inflows for reservoir and open-loop pumping
- PECD-hydro-daily-ror-generation.csv: daily run-of-river generation
- PECD-hydro-weekly-reservoir-min-max-generation.csv: minimum and maximum weekly reservoir generation
- PECD-hydro-weekly-reservoir-min-max-levels.csv: weekly minimum and maximum reservoir levels

Capacities

The file PECD-hydro-capacities.csv contains: run-of-river capacity (MW) and storage capacity (GWh), reservoir plants capacity (MW) and storage capacity (GWh), closed-loop pumping/turbining (MW) and storage capacity, and open-loop pumping/turbining (MW) and storage capacity. The data is extracted from the Excel files whose names start with PEMM, from the following sections:

- Run-of-River and pondage: rows 5 to 7, columns 2 to 5
- Reservoir: rows 5 to 7, columns 1 to 3
- Pump storage - Open Loop: rows 5 to 7, columns 1 to 3
- Pump storage - Closed Loop: rows 5 to 7, columns 1 to 3

Inflows

The file PECD-hydro-weekly-inflows.csv contains the weekly inflows (GWh) for the climatic years 1982-2017 for reservoir plants and open-loop pumping. The data is extracted from the Excel files whose names start with PEMM, from the following sections:

- Reservoir: rows 13 to 66, columns 16 to 51
- Pump storage - Open Loop: rows 13 to 66, columns 16 to 51

Daily run-of-river

The file PECD-hydro-daily-ror-generation.csv contains the daily run-of-river generation (GWh). The data is extracted from the Excel files whose names start with PEMM, from the following section:

- Run-of-River and pondage: rows 13 to 378, columns 15 to 51

Minimum and maximum reservoir generation

The file PECD-hydro-weekly-reservoir-min-max-generation.csv contains the minimum and maximum generation (MW, weekly) for reservoir-based plants for the climatic years 1982-2017. The data is extracted from the Excel files whose names start with PEMM, from the following sections:

- Reservoir: rows 13 to 66, columns 196 to 231
- Reservoir: rows 13 to 66, columns 232 to 267

Minimum/maximum reservoir levels

The file PECD-hydro-weekly-reservoir-min-max-levels.csv contains the minimum and maximum reservoir levels at the beginning of each week (scaled coefficient from 0 to 1). The data is extracted from the Excel files whose names start with PEMM, from the following sections:

- Reservoir: rows 14 to 66, column 12
- Reservoir: rows 14 to 66, column 13

CHANGELOG

[2020/07/17] Added maximum generation for the reservoir
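The row/column coordinates above are 1-based and inclusive; the sketch below shows the slicing logic on a toy in-memory sheet. With pandas, the equivalent of "rows 5 to 7, columns 2 to 5" would be `read_excel(path, skiprows=4, nrows=3, usecols="B:E")`, with the sheet name as an extra argument:

```python
def extract_block(sheet, first_row, last_row, first_col, last_col):
    """Slice a 1-based inclusive row/column window out of a sheet
    (a list of rows), mirroring e.g. 'rows 5 to 7, columns 2 to 5'."""
    return [row[first_col - 1:last_col] for row in sheet[first_row - 1:last_row]]

# Toy 7x5 sheet standing in for a section of one of the PEMM Excel files.
sheet = [[f"r{r}c{c}" for c in range(1, 6)] for r in range(1, 8)]
block = extract_block(sheet, 5, 7, 2, 5)   # 3 rows x 4 columns
```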
Measurements of full-spectrum (350-2500 nm) canopy spectral reflectance of Arctic plant species at the Teller and Kougarok NGEE-Arctic sites, Seward Peninsula, Alaska. Spectra were collected in July 2016 using an SVC HR-2014i spectroradiometer together with a Spectralon white plate to calibrate each measurement under variable illumination conditions. The locations of the 43 measurement targets are provided as latitude and longitude recorded by the spectroradiometer internal GPS. This data package comprises .csv data and metadata files, and the SVC instrument output (.sig in .zip). The Next-Generation Ecosystem Experiments: Arctic (NGEE Arctic), was a research effort to reduce uncertainty in Earth System Models by developing a predictive understanding of carbon-rich Arctic ecosystems and feedbacks to climate. NGEE Arctic was supported by the Department of Energy's Office of Biological and Environmental Research. The NGEE Arctic project had two field research sites: 1) located within the Arctic polygonal tundra coastal region on the Barrow Environmental Observatory (BEO) and the North Slope near Utqiagvik (Barrow), Alaska and 2) multiple areas on the discontinuous permafrost region of the Seward Peninsula north of Nome, Alaska. Through observations, experiments, and synthesis with existing datasets, NGEE Arctic provided an enhanced knowledge base for multi-scale modeling and contributed to improved process representation at global pan-Arctic scales within the Department of Energy's Earth system Model (the Energy Exascale Earth System Model, or E3SM), and specifically within the E3SM Land Model component (ELM).
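The white-plate calibration described above amounts to a per-wavelength ratio of target to reference radiance; a minimal sketch (the panel reflectance value and the sample radiances are illustrative, not from the dataset):

```python
# Converting calibrated radiance to reflectance with a white-reference
# panel: reflectance = target / reference, scaled by the panel's own
# reflectance (taken here as 0.99 purely for illustration).
PANEL_REFLECTANCE = 0.99

def to_reflectance(target_radiance, reference_radiance):
    return [t / r * PANEL_REFLECTANCE
            for t, r in zip(target_radiance, reference_radiance)]

# Two made-up wavelength samples: target vs. Spectralon reference radiance.
refl = to_reflectance([0.12, 0.30], [0.40, 0.50])
```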
The latest National Statistics for England about the experience of patients in the NHS, produced by the Department of Health and the Care Quality Commission, in Excel and .csv format.
Full publications can be found in the patient experience statistics series.
Supporting documentation including a methodology paper is also available for this series.
Sky Packets provides premium, U.S.-sourced mobile attribution data, mobile IP data, and rich 1st party data captured directly from our managed public WiFi networks. All data is 100% opt-in, fully privacy-compliant, and delivered in clean CSV format (JSON available for some locations) for seamless integration into your analytics and targeting platforms.
Operating advanced connectivity infrastructure across high-foot-traffic urban locations in the United States, Sky Packets transforms physical spaces into intelligent data environments. Our networks are equipped with cutting-edge WiFi, mmWave, and private LTE technologies, enabling highly accurate, real-time user insights.
Data buyers gain access to valuable behavioral and location-based signals, verified and structured for immediate use in marketing attribution, foot traffic analysis, audience segmentation, and other high-value applications. Because the data originates from secure, first-party environments, buyers can trust its quality, compliance, and relevance.
Whether you're powering ad tech, retail intelligence, or urban planning tools, Sky Packets delivers a transparent and scalable data pipeline designed for today’s most sophisticated data ecosystems.
https://joinup.ec.europa.eu/page/eupl-text-11-12
CY-Bench is a dataset and benchmark for subnational crop yield forecasting, with coverage of major crop growing countries of the world for maize and wheat. By subnational, we mean the administrative level where yield statistics are published. When statistics are available for multiple levels, we pick the highest resolution. The dataset combines sub-national yield statistics with relevant predictors, such as growing-season weather indicators, remote sensing indicators, evapotranspiration, soil moisture indicators, and static soil properties. CY-Bench has been designed and curated by agricultural experts, climate scientists, and machine learning researchers from the AgML Community, with the aim of facilitating model intercomparison across the diverse agricultural systems around the globe in conditions as close as possible to real-world operationalization. Ultimately, by lowering the barrier to entry for ML researchers in this crucial application area, CY-Bench will facilitate the development of improved crop forecasting tools that can be used to support decision-makers in food security planning worldwide.
* Crops : Wheat & Maize
* Spatial Coverage : Wheat (29 countries), Maize (38 countries).
See CY-Bench paper appendix for the list of countries.
* Temporal Coverage : Varies. See country-specific data
The benchmark data is organized as a collection of CSV files, with each file representing a specific category of variable for a particular country. Each CSV file is named according to the category and the country it pertains to, facilitating easy identification and retrieval. The data within each CSV file is structured in tabular format, where rows represent observations and columns represent different predictors related to a category of variable.
All data files are provided as .csv.
Data | Description | Variables (units) | Temporal Resolution | Data Source (Reference) |
---|---|---|---|---|
crop_calendar | Start and end of growing season | sos (day of the year), eos (day of the year) | Static | World Cereal (Franch et al, 2022) |
fpar | fraction of absorbed photosynthetically active radiation | fpar (%) | Dekadal (3 times a month; 1-10, 11-20, 21-31) | European Commission's Joint Research Centre (EC-JRC, 2024) |
ndvi | normalized difference vegetation index | - | approximately weekly | MOD09CMG (Vermote, 2015) |
meteo | temperature, precipitation (prec), radiation, potential evapotranspiration (et0), climatic water balance (= prec - et0) | tmin (C), tmax (C), tavg (C), prec (mm), et0 (mm), cwb (mm), rad (J m-2 day-1) | daily | AgERA5 (Boogaard et al, 2022), FAO-AQUASTAT for et0 (FAO-AQUASTAT, 2024) |
soil_moisture | surface soil moisture, rootzone soil moisture | ssm (kg m-2), rsm (kg m-2) | daily | GLDAS (Rodell et al, 2004) |
soil | available water capacity, bulk density, drainage class | awc (c m-1), bulk_density (kg dm-3), drainage class (category) | static | WISE Soil database (Batjes, 2016) |
yield | end-of-season yield | yield (t ha-1) | yearly | Various country or region specific sources (see crop_statistics_... in https://github.com/BigDataWUR/AgML-CY-Bench/tree/main/data_preparation) |
The CY-Bench dataset is structured at the first level by crop type and subsequently by country. For each country, the folder name follows the ISO 3166-1 alpha-2 two-letter code. A separate .csv file is available for each predictor and for the crop calendar, as shown below. The CSV files are named to reflect the corresponding variable, crop type, and country, e.g. **variable_croptype_country.csv**.
```
CY-Bench
│
└─── maize
│ │
│ └─── AO
│ │ -- crop_calendar_maize_AO.csv
│ │ -- fpar_maize_AO.csv
│ │ -- meteo_maize_AO.csv
│ │ -- ndvi_maize_AO.csv
│ │ -- soil_maize_AO.csv
│ │ -- soil_moisture_maize_AO.csv
│ │ -- yield_maize_AO.csv
│ │
│ └─── AR
│ -- crop_calendar_maize_AR.csv
│ -- fpar_maize_AR.csv
│ -- ...
│
└─── wheat
│ │
│ └─── AR
│ │ -- crop_calendar_wheat_AR.csv
│ │ -- fpar_wheat_AR.csv
│ │ ...
```
```
X
└─── crop_calendar_maize_X.csv
│ -- crop_name (name of the crop)
│ -- adm_id (unique identifier for a subnational unit)
│ -- sos (start of crop season)
│ -- eos (end of crop season)
│
└─── fpar_maize_X.csv
│ -- crop_name
│ -- adm_id
│ -- date (in the format YYYYMMdd)
│ -- fpar
│
└─── meteo_maize_X.csv
│ -- crop_name
│ -- adm_id
│ -- date (in the format YYYYMMdd)
│ -- tmin (minimum temperature)
│ -- tmax (maximum temperature)
│ -- prec (precipitation)
│ -- rad (radiation)
│ -- tavg (average temperature)
│ -- et0 (evapotranspiration)
│ -- cwb (crop water balance)
│
└─── ndvi_maize_X.csv
│ -- crop_name
│ -- adm_id
│ -- date (in the format YYYYMMdd)
│ -- ndvi
│
└─── soil_maize_X.csv
│ -- crop_name
│ -- adm_id
│ -- awc (available water capacity)
│ -- bulk_density
│ -- drainage_class
│
└─── soil_moisture_maize_X.csv
│ -- crop_name
│ -- adm_id
│ -- date (in the format YYYYMMdd)
│ -- ssm (surface soil moisture)
│ -- rsm (rootzone soil moisture)
│
└─── yield_maize_X.csv
│ -- crop_name
│ -- country_code
│ -- adm_id
│ -- harvest_year
│ -- yield
│ -- harvest_area
│ -- production
```
The full dataset can be downloaded directly from Zenodo or using the `zenodo_get` library.
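Because every per-country CSV carries adm_id, predictors and labels can be joined without extra tooling; a minimal stdlib sketch with invented two-row stand-ins for yield_maize_AO.csv and soil_maize_AO.csv (real column names, fabricated values):

```python
import csv, io

# Invented stand-ins for two CY-Bench files; merged on adm_id.
yield_csv = """crop_name,country_code,adm_id,harvest_year,yield,harvest_area,production
maize,AO,AO-1,2018,1.2,1000,1200
"""
soil_csv = """crop_name,adm_id,awc,bulk_density,drainage_class
maize,AO-1,0.15,1.4,well
"""

# Index static soil properties by subnational unit, then attach them to
# each yield record (csv values stay as strings unless converted).
soil = {r["adm_id"]: r for r in csv.DictReader(io.StringIO(soil_csv))}
merged = [{**r, **soil[r["adm_id"]]}
          for r in csv.DictReader(io.StringIO(yield_csv))]
```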
We kindly ask all users of CY-Bench to properly respect licensing and citation conditions of the datasets included.
This dataset was created by Ivan Mikhnenkov.
It contains the following files:
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Sample data for exercises in Further Adventures in Data Cleaning.
When you need to analyze crypto market history, batch processing often beats streaming APIs. That's why we built the Flat Files S3 API - giving analysts and researchers direct access to structured historical cryptocurrency data without the integration complexity of traditional APIs.
Pull comprehensive historical data across 800+ cryptocurrencies and their trading pairs, delivered in clean, ready-to-use CSV formats that drop straight into your analysis tools. Whether you're building backtest environments, training machine learning models, or running complex market studies, our flat file approach gives you the flexibility to work with massive datasets efficiently.
Why work with us?
Market Coverage & Data Types:
- Comprehensive historical data since 2010 (for chosen assets)
- Comprehensive order book snapshots and updates
- Trade-by-trade data

Technical Excellence:
- 99.9% uptime guarantee
- Standardized data format across exchanges
- Flexible integration
- Detailed documentation
- Scalable architecture
CoinAPI serves hundreds of institutions worldwide, from trading firms and hedge funds to research organizations and technology providers. Our S3 delivery method easily integrates with your existing workflows, offering familiar access patterns, reliable downloads, and straightforward automation for your data team. Our commitment to data quality and technical excellence, combined with accessible delivery options, makes us the trusted choice for institutions that demand both comprehensive historical data and real-time market intelligence.
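Once a flat file is downloaded, trade-by-trade rows reduce to candles with plain Python; a sketch with invented rows and column names (the real flat-file schema is documented by CoinAPI):

```python
import csv, io

# Hypothetical trade-by-trade rows; column names invented for illustration.
trades = """time,price,size
2023-01-01T09:00:00,16500.0,0.5
2023-01-01T12:00:00,16800.0,0.2
2023-01-01T18:00:00,16600.0,1.0
"""

# Collapse one day of trades into an OHLC candle: first/max/min/last price.
prices = [float(r["price"]) for r in csv.DictReader(io.StringIO(trades))]
ohlc = {"open": prices[0], "high": max(prices),
        "low": min(prices), "close": prices[-1]}
```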
== Quick starts ==
Batch export podcast metadata to CSV files:
1) Export by search keyword: https://www.listennotes.com/podcast-datasets/keyword/
2) Export by category: https://www.listennotes.com/podcast-datasets/category/
== Quick facts ==
- The most up-to-date and comprehensive podcast database available
- All languages & all countries
- Includes over 3,500,000 podcasts
- Features 35+ data fields, such as basic metadata, global rank, RSS feed (with audio URLs), Spotify links, and more
- Delivered in CSV format
== Data Attributes ==
See the full list of data attributes on this page: https://www.listennotes.com/podcast-datasets/fields/?filter=podcast_only
How to access podcast audio files: Our dataset includes RSS feed URLs for all podcasts. You can retrieve audio for over 170 million episodes directly from these feeds. With access to the raw audio, you’ll have high-quality podcast speech data ideal for AI training and related applications.
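Episode audio URLs sit in the enclosure tags of each RSS feed in the dataset; a minimal stdlib sketch on an invented feed fragment:

```python
import xml.etree.ElementTree as ET

# A minimal invented RSS fragment showing how episode audio URLs appear
# in <enclosure> tags of a podcast feed.
rss = """<rss><channel><item>
<title>Episode 1</title>
<enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/>
</item></channel></rss>"""

root = ET.fromstring(rss)
# Collect the audio URL of every episode item in the feed.
audio_urls = [enc.get("url") for enc in root.iter("enclosure")]
```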
== Custom Offers ==
We can provide custom datasets based on your needs, such as language-specific data, daily/weekly/monthly update frequency, or one-time purchases.
We also provide a RESTful API at PodcastAPI.com
Contact us: hello@listennotes.com
== Need Help? ==
If you have any questions about our products, feel free to reach out to hello@listennotes.com
== About Listen Notes, Inc. ==
Since 2017, Listen Notes, Inc. has provided the leading podcast search engine and podcast database.
Government Equalities Office spend data June 2012 (CSV format)
Date: Thu Oct 04 11:11:01 BST 2012
Measurements of full-range (350–2500 nm) canopy spectral reflectance of Arctic plant species, plots, and transects at the Next Generation Ecosystem Experiment Arctic (NGEE Arctic) Teller Mile Marker (MM27) and Kougarok Fire Complex (KFC) sites, Seward Peninsula, Alaska. Spectra were collected in July 2022 using a handheld SVC HR-2014i spectroradiometer. All spectra were collected as calibrated surface radiance and converted to surface reflectance using 99.99% reflective Spectralon white reference standard. This data package includes unprocessed instrument output of the spectra signals (.sig) and, for some canopy measurements, photographs of the target taken by the SVC instrument camera or handheld digital camera (.jpg), GPS locations and file metadata (.csv). The Next-Generation Ecosystem Experiments: Arctic (NGEE Arctic), was a 15-year research effort (2012-2027) to reduce uncertainty in Earth System Models by developing a predictive understanding of carbon-rich Arctic ecosystems and feedbacks to climate. NGEE Arctic was supported by the Department of Energy's Office of Biological and Environmental Research. The NGEE Arctic project had two field research sites: 1) located within the Arctic polygonal tundra coastal region on the Barrow Environmental Observatory (BEO) and the North Slope near Utqiagvik (Barrow), Alaska and 2) multiple areas on the discontinuous permafrost region of the Seward Peninsula north of Nome, Alaska. Through observations, experiments, and synthesis with existing datasets, NGEE Arctic provided an enhanced knowledge base for multi-scale modeling and contributed to improved process representation at global pan-Arctic scales within the Department of Energy's Earth system Model (the Energy Exascale Earth System Model, or E3SM), and specifically within the E3SM Land Model component (ELM).
Want to save VCF files in Excel? Then try the CubexSoft vCard to CSV Converter Tool. The tool converts multiple VCF files into CSV files at once. It is also very simple to understand the tool's functions, without requiring any technical skill. The app can convert vCards from Android, Apple phones, computers, smartphones, etc. Users can run this VCF to CSV tool on Windows operating systems. For demo purposes, it lets users convert 5 .vcf files to .csv files free of… See the full description on the dataset page: https://huggingface.co/datasets/lawrwtwinkle111/vcard-to-csv-converter.
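For simple single-value vCards, the conversion such a tool performs can be sketched with the standard library (real vCards carry many more properties and edge cases than this illustration handles):

```python
import csv, io

# A toy vCard; real VCF files may hold many cards and typed properties.
vcf = """BEGIN:VCARD
FN:Ada Lovelace
TEL:+44 20 0000 0000
END:VCARD"""

# Collect property/value pairs, skipping the BEGIN/END markers.
contact = {}
for line in vcf.splitlines():
    if ":" in line and not line.startswith(("BEGIN", "END")):
        key, value = line.split(":", 1)
        contact[key] = value

# Write the contact out as one CSV row.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["FN", "TEL"])
writer.writeheader()
writer.writerow(contact)
```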
This dataset is an extract from COVID-19 Open Research Dataset Challenge (CORD-19).
This preprocessing is necessary because the original input data is stored in JSON files whose structure is too complex to analyze directly.
The preprocessing further consisted of filtering for documents that specifically discuss the COVID-19 disease and its other names, among other general data review and cleaning activities.
As a result, this dataset contains a set of files in CSV format, grouped by original source (Biorxiv, Comm_use, Custom_licence, Nomcomm_use). Each of those files contains a subset of data columns, specifically paper_id, doc_title, doc_text, and source.
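The JSON-to-CSV flattening described above can be sketched as follows; the document below is a simplified stand-in for a real CORD-19 JSON file (the actual schema nests metadata and body_text more deeply):

```python
import json

# Simplified stand-in for one CORD-19 JSON document.
doc = json.loads("""{
  "paper_id": "abc123",
  "metadata": {"title": "COVID-19 transmission"},
  "body_text": [{"text": "First paragraph."}, {"text": "Second."}]
}""")

# Flatten into the four columns used by the published CSVs:
# paper_id, doc_title, doc_text, and source.
row = {
    "paper_id": doc["paper_id"],
    "doc_title": doc["metadata"]["title"],
    "doc_text": " ".join(p["text"] for p in doc["body_text"]),
    "source": "biorxiv",
}
```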
Full-range (350 - 2500 nm) leaf and canopy reflectance spectra of various Arctic tundra ecosystem endmembers, including species-level leaf reflectance, canopy-scale species endmember spectra, plot-scale spectra, and transect spectra as well as non-vegetated surface (NVS) spectra. The datasets were collected at the three core NGEE-Arctic watersheds, Kougarok, Teller, and Council within the larger Seward Peninsula, Alaska region. The data were collected in the months of July and August of 2017 using a full-range Spectra Vista Corporation (SVC) HR-1024i spectroradiometer. Leaf-level spectra were collected with the original SVC leaf clip/plant probe connected to the spectrometer through a 1.15 meter long fiber optic cable, while canopy-scale reflectance was collected with an 8-degree field-of-view (FOV) foreoptic lens. All spectral measurements were collected as calibrated surface radiance and converted to surface reflectance using a 99.99% reflective Spectralon white reference standard. For those canopy spectra collected with associated functional trait data, the FOV of the instrument was positioned to include the same leaves harvested for functional trait measurements, including leaf mass per area (LMA) and foliar carbon and nitrogen content (see associated dataset). This data package includes 26 files in a variety of formats including processed canopy and leaf spectra (.csv), processed dGPS locations (.csv and .kmz), digital photographs of spectral targets (.jpg) and raw data from spectroradiometer and dGPS instruments (compressed as tar.gz). Metadata files include data descriptions (_dd.csv) for tabular data and a key to species symbols used in data files. All included files are listed and described in NGA110_flmd.csv. The Next-Generation Ecosystem Experiments: Arctic (NGEE Arctic), was a research effort to reduce uncertainty in Earth System Models by developing a predictive understanding of carbon-rich Arctic ecosystems and feedbacks to climate. 
NGEE Arctic was supported by the Department of Energy's Office of Biological and Environmental Research. The NGEE Arctic project had two field research sites: 1) located within the Arctic polygonal tundra coastal region on the Barrow Environmental Observatory (BEO) and the North Slope near Utqiagvik (Barrow), Alaska and 2) multiple areas on the discontinuous permafrost region of the Seward Peninsula north of Nome, Alaska. Through observations, experiments, and synthesis with existing datasets, NGEE Arctic provided an enhanced knowledge base for multi-scale modeling and contributed to improved process representation at global pan-Arctic scales within the Department of Energy's Earth system Model (the Energy Exascale Earth System Model, or E3SM), and specifically within the E3SM Land Model component (ELM).