45 datasets found
  1. Free Data

    • optiondata.org
    Updated Sep 3, 2022
    + more versions
    Cite
    (2022). Free Data [Dataset]. https://optiondata.org/
    Dataset updated
    Sep 3, 2022
    License

    https://optiondata.org/about.html

    Time period covered
    Jan 1, 2013 - Jun 30, 2013
    Description

    Free historical options data, dataset files in CSV format.

  2. Datasets in 2012 to 2024

    • optiondata.org
    Updated Sep 3, 2022
    Cite
    (2022). Datasets in 2012 to 2024 [Dataset]. https://optiondata.org/
    Dataset updated
    Sep 3, 2022
    License

    https://optiondata.org/about.html

    Description

    Historical option data in 2019 to 2021, dataset files in CSV format.

  3. Datasets in the last 24 years

    • optiondata.org
    Updated Sep 3, 2022
    Cite
    (2022). Datasets in the last 24 years [Dataset]. https://optiondata.org/
    Dataset updated
    Sep 3, 2022
    License

    https://optiondata.org/about.html

    Time period covered
    May 1, 2002 - Present
    Description

    Historical option data in the last 24 years, dataset files in CSV format.

  4. Datasets in 2024

    • optiondata.org
    Updated Sep 3, 2022
    Cite
    (2022). Datasets in 2024 [Dataset]. https://optiondata.org/
    Dataset updated
    Sep 3, 2022
    License

    https://optiondata.org/about.html

    Time period covered
    Jan 1, 2024 - Dec 31, 2024
    Description

    Historical option EOD data in 2021, dataset files in CSV format.

  5. US Options Data Packages for Trading, Research, Education & Sentiment

    • datarade.ai
    Updated Dec 6, 2021
    Cite
    Intrinio (2021). US Options Data Packages for Trading, Research, Education & Sentiment [Dataset]. https://datarade.ai/data-products/us-options-data-packages-for-trading-research-education-s-intrinio
    Dataset updated
    Dec 6, 2021
    Dataset authored and provided by
    Intrinio
    Area covered
    United States of America
    Description

    We offer three easy-to-understand packages to fit your business needs. Visit intrinio.com/pricing to compare packages.

    Bronze

    The Bronze package is ideal for developing your idea and prototyping your platform with high-quality EOD options prices sourced from OPRA.

    When you’re ready for launch, it’s a seamless transition to our Silver package for delayed options prices, Greeks and implied volatility, and unusual options activity, plus delayed equity prices.

    • Latest EOD OPRA options prices

    Exchange Fees & Requirements:

    This package requires no paperwork or exchange fees.

    Bronze Benefits:

    • Web API access
    • 300 API calls/minute limit
    • File downloads
    • Unlimited internal users
    • Unlimited internal & external display
    • Built-in ticketing system
    • Live chat & email support

    Silver

    The Silver package is ideal for clients that want delayed options data for their platform, or for startups in the development and testing phase. You’ll get 15-minute delayed options data, Greeks, implied volatility, and unusual options activity, plus the latest EOD options prices and delayed equity prices.

    You can easily move up to the Gold package for real-time options and equity prices, additional access methods, and premium support options.

    • 15-minute delayed OPRA options prices, Greeks & IV
    • 15-minute delayed OPRA unusual options activity
    • Latest EOD OPRA options prices
    • 15-minute delayed equity prices
    • Underlying security reference data

    Exchange Fees & Requirements:

    If you subscribe to the Silver package and will not display the data outside of your firm, you’ll need to fill out a simplified exchange agreement and send it back to us. There are no exchange fees and we can provide immediate access to the data.

    If you subscribe to the Silver package and will display the data outside of your firm, we’ll work with your team to submit the correct paperwork to OPRA for approval. Once approved, OPRA will bill exchange fees directly to your firm – typically $600-$2000/month depending on your use case. These fees are the same no matter what data provider you use. Per-user reporting is not required, so there are no variable per user fees.

    Silver Benefits:

    • Assistance with OPRA paperwork
    • Web API access
    • 2,000 API calls/minute limit
    • File downloads
    • Access to third-party datasets via Intrinio API (additional fees required)
    • Unlimited internal users
    • Unlimited internal & external display
    • Built-in ticketing system
    • Live chat & email support
    • Concierge customer success team
    • Co-marketing & promotional initiatives

    Gold

    The Gold package is ideal for funded companies that are in the growth or scaling stage, as well as institutions that are innovating within the fintech space. This full-service solution offers real-time options prices, Greeks and implied volatility, and unusual options activity, as well as the latest EOD options prices and real-time equity prices.

    You’ll also have access to our wide range of modern access methods, third-party data via Intrinio’s API with licensing assistance, support from our team of expert engineers, custom delivery architectures, and much more.

    • Real-time OPRA options prices, Greeks & IV
    • Real-time OPRA unusual options activity
    • Latest EOD OPRA options prices
    • Real-time equity prices
    • Underlying security reference data

    Exchange Fees & Requirements:

    If you subscribe to the Gold package, we’ll work with your team to submit the correct paperwork to OPRA for approval. Once approved, OPRA will bill exchange fees directly to your firm – typically $600-$2000/month depending on your use case. These fees are the same no matter what data provider you use. Per-user reporting is required, with an associated variable per user fee.

    Gold Benefits:

    • Assistance with OPRA paperwork
    • Web API access
    • 2,000 API calls/minute limit
    • WebSocket access (additional fee)
    • Customizable access methods (Snowflake, FTP, etc.)
    • Access to third-party datasets via Intrinio API (additional fees required)
    • Unlimited internal users
    • Unlimited internal & external display
    • Built-in ticketing system
    • Live chat & email support
    • Concierge customer success team
    • Co-marketing & promotional initiatives
    • Access to engineering team

    Platinum

    Don’t see a package that fits your needs? Our team can design a premium custom package for your business.

  6. Sample Data at 2022-08-24

    • optiondata.org
    Updated Sep 3, 2022
    Cite
    (2022). Sample Data at 2022-08-24 [Dataset]. https://optiondata.org/
    Dataset updated
    Sep 3, 2022
    License

    https://option.discount/privacy.html

    Time period covered
    Aug 24, 2022
    Description

    Historical option sample data at 2022-08-24, dataset files in CSV format.

  7. options-IV-SP500

    • huggingface.co
    Updated Oct 14, 2019
    Cite
    Juan Pablo (2019). options-IV-SP500 [Dataset]. https://huggingface.co/datasets/gauss314/options-IV-SP500
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 14, 2019
    Authors
    Juan Pablo
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Downloading the Options IV SP500 Dataset

    This document will guide you through the steps to download the Options IV SP500 dataset from Hugging Face Datasets. This dataset includes data on the options of the S&P 500, including implied volatility. To start, you'll need to install Hugging Face's datasets library if you haven't done so already. You can do this using the following pip command: !pip install datasets

    Here's the Python code to load the Options IV SP500 dataset from Hugging… See the full description on the dataset page: https://huggingface.co/datasets/gauss314/options-IV-SP500.
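The loading code the truncated description refers to can be sketched minimally as follows. The repository id is taken from the dataset URL above; split and column names are not assumed here.

```python
# Repository id taken from the dataset URL above.
DATASET_ID = "gauss314/options-IV-SP500"

def load_iv_sp500():
    """Download and cache the dataset from the Hugging Face Hub (network required)."""
    from datasets import load_dataset  # pip install datasets
    return load_dataset(DATASET_ID)

# Usage (requires network and the `datasets` package):
#   ds = load_iv_sp500()   # returns a DatasetDict keyed by split name
```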

  8. FX Options Market Data

    • traditiondata.com
    csv, pdf
    Updated Feb 8, 2023
    Cite
    TraditionData (2023). FX Options Market Data [Dataset]. https://www.traditiondata.com/products/fx-options/
    Available download formats: csv, pdf
    Dataset updated
    Feb 8, 2023
    Dataset authored and provided by
    TraditionData
    License

    https://www.traditiondata.com/terms-conditions/

    Description

    TraditionData’s FX Options Market Data service provides comprehensive information on FX options markets, leveraging the Volbroker platform for transparency and efficiency.

    • Offers real-time volatility price transparency in ATM Straddles, Delta Risk Reversals, and Butterflies.
    • Suitable for traders, risk managers, or portfolio managers managing currency risk and maximizing returns.

    Visit FX Options Market Data for more information.

  9. CBOE Volatility Index: VIX

    • fred.stlouisfed.org
    json
    Updated Sep 18, 2025
    + more versions
    Cite
    (2025). CBOE Volatility Index: VIX [Dataset]. https://fred.stlouisfed.org/series/VIXCLS
    Available download formats: json
    Dataset updated
    Sep 18, 2025
    License

    https://fred.stlouisfed.org/legal/#copyright-citation-required

    Description

    Graph and download economic data for CBOE Volatility Index: VIX (VIXCLS) from 1990-01-02 to 2025-09-17 about VIX, volatility, stock market, and USA.
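FRED series such as VIXCLS can also be pulled programmatically. A minimal sketch, assuming FRED's commonly used fredgraph.csv CSV endpoint (an assumption; this listing only documents the series page itself):

```python
from urllib.parse import urlencode

FRED_CSV_BASE = "https://fred.stlouisfed.org/graph/fredgraph.csv"

def fred_csv_url(series_id: str) -> str:
    """Build the CSV download URL for a FRED series id, e.g. VIXCLS."""
    return f"{FRED_CSV_BASE}?{urlencode({'id': series_id})}"

# Usage (network required), e.g. with pandas:
#   import pandas as pd
#   vix = pd.read_csv(fred_csv_url("VIXCLS"))  # date column plus VIXCLS values
print(fred_csv_url("VIXCLS"))
```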

  10. Development Economics Data Group - Download Options Score | gimi9.com

    • gimi9.com
    Updated May 7, 2025
    Cite
    (2025). Development Economics Data Group - Download Options Score | gimi9.com [Dataset]. https://gimi9.com/dataset/worldbank_wb_spi_d2_2_download_options/
    Dataset updated
    May 7, 2025
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Download Options Score from Open Data Watch. Openness element 3 measures whether data are available with three different download options: bulk download, API, and user-selectable options.

    A bulk download is defined at the indicator level as: the ability to download all data recorded in the Open Data Inventory (ODIN) for a particular indicator (all years, disaggregations, and subnational data) in one file, or in multiple files that can be downloaded simultaneously. Bulk downloads are a key component of the Open Definition, which requires data to be “provided as a whole . . . and downloadable via the internet.”

    User-selectable download options are defined as: users must be able to select an indicator and at least one other dimension to create a download or table. These dimensions could include time periods, geographic disaggregations, or other recommended disaggregations. An option to choose the file export format is not enough.

    API stands for Application Programming Interface. Ideally, APIs should be clearly displayed on the website. ODIN assumes APIs are available for the NSO's entire data collection used in ODIN, unless clearly stated. ODIN assessors do not register for use or test API functionality. For more information on APIs, see this guide. Scores are given by data category, not by indicator.

  11. ICE BofA US Corporate Index Option-Adjusted Spread

    • fred.stlouisfed.org
    json
    Updated Sep 18, 2025
    + more versions
    Cite
    (2025). ICE BofA US Corporate Index Option-Adjusted Spread [Dataset]. https://fred.stlouisfed.org/series/BAMLC0A0CM
    Available download formats: json
    Dataset updated
    Sep 18, 2025
    License

    https://fred.stlouisfed.org/legal/#copyright-pre-approval

    Area covered
    United States
    Description

    Graph and download economic data for ICE BofA US Corporate Index Option-Adjusted Spread (BAMLC0A0CM) from 1996-12-31 to 2025-09-17 about option-adjusted spread, corporate, and USA.

  12. CBOE S&P 500 3-Month Volatility Index

    • fred.stlouisfed.org
    json
    Updated Sep 19, 2025
    Cite
    (2025). CBOE S&P 500 3-Month Volatility Index [Dataset]. https://fred.stlouisfed.org/series/VXVCLS
    Available download formats: json
    Dataset updated
    Sep 19, 2025
    License

    https://fred.stlouisfed.org/legal/#copyright-citation-required

    Description

    Graph and download economic data for CBOE S&P 500 3-Month Volatility Index (VXVCLS) from 2007-12-04 to 2025-09-18 about VIX, volatility, stock market, 3-month, and USA.

  13. OpenCon Application Data

    • figshare.com
    txt
    Updated Jun 4, 2023
    Cite
    OpenCon 2015; SPARC; Right to Research Coalition (2023). OpenCon Application Data [Dataset]. http://doi.org/10.6084/m9.figshare.1512496.v1
    Available download formats: txt
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    OpenCon 2015; SPARC; Right to Research Coalition
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    OpenCon 2015 Application Open Data

    The purpose of this document is to accompany the public release of data collected from OpenCon 2015 applications.

    Download & Technical Information

    The data can be downloaded in CSV format from GitHub here: https://github.com/RightToResearch/OpenCon-2015-Application-Data. The file uses UTF-8 encoding, comma as field delimiter, quotation marks as text delimiter, and no byte order mark.
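The stated conventions (UTF-8, comma delimiter, quotation marks as text delimiter, no byte order mark) match Python's `csv` defaults. A small illustrative parse with made-up rows; the real column set is the one listed under "Data Fields" below.

```python
import csv
import io

# Illustrative two-row sample in the stated format (values are invented).
sample = io.StringIO(
    '"Unique ID","Gender","Country of Nationality"\n'
    '"482913","Female","Brazil"\n'
)

# Comma delimiter and '"' as quote character are the csv module defaults.
rows = list(csv.DictReader(sample))
print(rows[0]["Country of Nationality"])  # → Brazil
```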

    License and Requests

    This data is released to the public for free and open use under a CC0 1.0 license. We have a couple of requests for anyone who uses the data. First, we’d love it if you would let us know what you are doing with it, and share back anything you develop with the OpenCon community (#opencon / @open_con ). Second, it would also be great if you would include a link to the OpenCon 2015 website (www.opencon2015.org) wherever the data is used. You are not obligated to do any of this, but we’d appreciate it!

    Data Fields

    Unique ID

    This is a unique ID assigned to each applicant. Numbers were assigned using a random number generator.

    Timestamp

    This was the timestamp recorded by Google Forms. Timestamps are in EDT (Eastern U.S. Daylight Time). Note that the application process officially began at 1:00pm EDT on June 1 and ended at 6:00am EDT on June 23. Some applications have timestamps later than this date, due to a variety of reasons including exceptions granted for technical difficulties, error corrections (which required re-submitting the form), and applications sent in via email and later entered manually into the form.

    Gender

    Mandatory. Choose one from list or fill-in other. Options provided: Male, Female, Other (fill in).

    Country of Nationality

    Mandatory. Choose one option from list.

    Country of Residence

    Mandatory. Choose one option from list.

    What is your primary occupation?

    Mandatory. Choose one from list or fill-in other. Options provided: Undergraduate student; Masters/professional student; PhD candidate; Faculty/teacher; Researcher (non-faculty); Librarian; Publisher; Professional advocate; Civil servant / government employee; Journalist; Doctor / medical professional; Lawyer; Other (fill in).

    Select the option below that best describes your field of study or expertise

    Mandatory. Choose one option from list.

    What is your primary area of interest within OpenCon’s program areas?

    Mandatory. Choose one option from list. Note: for the first approximately 24 hours the options were listed in this order: Open Access, Open Education, Open Data. After that point, we set the form to randomize the order, and noticed an immediate shift in the distribution of responses.

    Are you currently engaged in activities to advance Open Access, Open Education, and/or Open Data?

    Mandatory. Choose one option from list.

    Are you planning to participate in any of the following events this year?

    Optional. Choose all that apply from list. Multiple selections separated by semi-colon.

    Do you have any of the following skills or interests?

    Mandatory. Choose all that apply from list or fill-in other. Multiple selections separated by semi-colon. Options provided: Coding; Website Management / Design; Graphic Design; Video Editing; Community / Grassroots Organizing; Social Media Campaigns; Fundraising; Communications and Media; Blogging; Advocacy and Policy; Event Logistics; Volunteer Management; Research about OpenCon's Issue Areas; Other (fill-in).

    Data Collection & Cleaning

    This data consists of information collected from people who applied to attend OpenCon 2015. In the application form, questions that would be released as Open Data were marked with a caret (^), and applicants were asked to acknowledge before submitting the form that they understood their responses to these questions would be released as such. The questions we released were selected to avoid any potentially sensitive personal information, and to minimize the chances that any individual applicant can be positively identified.

    Applications were formally collected during a 22-day period beginning on June 1, 2015 at 13:00 EDT and ending on June 23 at 06:00 EDT. Some applications have timestamps later than this date, due to a variety of reasons including exceptions granted for technical difficulties, error corrections (which required re-submitting the form), and applications sent in via email and later entered manually into the form. Applications were collected using a Google Form embedded at http://www.opencon2015.org/attend, and the shortened bit.ly link http://bit.ly/AppsAreOpen was promoted through social media.

    The primary work we did to clean the data focused on identifying and eliminating duplicates. We removed all duplicate applications that had matching e-mail addresses and first and last names. We also identified a handful of other duplicates that used different e-mail addresses but were otherwise identical. In cases where duplicate applications contained any different information, we kept the information from the version with the most recent timestamp. We made a few minor adjustments in the country field for cases where the entry was obviously an error (for example, selecting a country listed alphabetically above or below the one indicated elsewhere in the application). We also removed one potentially offensive comment (which did not contain an answer to the question) from the Gender field and replaced it with “Other.”

    About OpenCon

    OpenCon 2015 is the student and early career academic professional conference on Open Access, Open Education, and Open Data and will be held on November 14-16, 2015 in Brussels, Belgium. It is organized by the Right to Research Coalition, SPARC (The Scholarly Publishing and Academic Resources Coalition), and an Organizing Committee of students and early career researchers from around the world. The meeting will convene students and early career academic professionals from around the world and serve as a powerful catalyst for projects led by the next generation to advance OpenCon's three focus areas—Open Access, Open Education, and Open Data. A unique aspect of OpenCon is that attendance at the conference is by application only, and the majority of participants who apply are awarded travel scholarships to attend. This model creates a unique conference environment where the most dedicated and impactful advocates can attend, regardless of where in the world they live or their access to travel funding. The purpose of the application process is to conduct these selections fairly. This year we were overwhelmed by the quantity and quality of applications received, and we hope that by sharing this data, we can better understand the OpenCon community and the state of student and early career participation in the Open Access, Open Education, and Open Data movements.

    Questions

    For inquires about the OpenCon 2015 Application data, please contact Nicole Allen at nicole@sparc.arl.org.

  14. Gamma Exposure (GEX) measurement of US stocks and options by Trading...

    • datarade.ai
    .json, .csv
    Updated Feb 4, 2021
    Cite
    Trading Volatility (2021). Gamma Exposure (GEX) measurement of US stocks and options by Trading Volatility [Dataset]. https://datarade.ai/data-products/gamma-exposure-gex-measurement-of-us-companies-and-indexes-tickerized-trading-volatility
    Available download formats: .json, .csv
    Dataset updated
    Feb 4, 2021
    Dataset authored and provided by
    Trading Volatility
    Area covered
    Canada, United States of America
    Description

    Our proprietary Skew-Adjusted Gamma Exposure measurements make adjustments to Naive GEX calculations to more accurately reflect actual gamma positioning of Market Makers who employ delta-hedging strategies. When Market Makers carry substantial negative gamma a security will often "over-react" to fundamental news. Conversely, when MMs carry substantial positive gamma a security will often "under-react" to news. Our data includes a quantified segmentation of a security's gamma distribution across all option strikes as well as across relevant expiration dates. Our website provides numerical, graphical, and historical views of all gamma data in our database. Additionally, our API access allows for easy download of csv files or import into Excel for further analysis and custom applications.

  15. Drought and Moisture Surplus for the Conterminous United States, Annual Data...

    • catalog.data.gov
    • resilience.climate.gov
    • +13more
    Updated Apr 21, 2025
    + more versions
    Cite
    U.S. Forest Service (2025). Drought and Moisture Surplus for the Conterminous United States, Annual Data 5-Year Windows (Image Service) [Dataset]. https://catalog.data.gov/dataset/drought-and-moisture-surplus-for-the-conterminous-united-states-annual-data-5-year-windows-b4ca4
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    U.S. Department of Agriculture Forest Service (http://fs.fed.us/)
    Area covered
    Contiguous United States, United States
    Description

    Note: To download this raster dataset, go to the ArcGIS Open Data Set, click the download button, and under additional resources select the raster download option; the data can also be downloaded directly from the FSGeodata Clearinghouse. To summarize this dataset by U.S. Forest Service lands, see the Drought Summary Tool. You can also explore cumulative drought and moisture changes from this StoryMap; additional drought products from the Office of Sustainability and Climate are available in our Climate Gallery and the OSC Drought page.

    The Moisture Deficit and Surplus map uses moisture difference z-score datasets developed by scientists Frank Koch, John Coulston, and William Smith of the Forest Service Southern Research Station. A z-score is a statistical method for assessing how different a value is from the mean (average). Mean moisture values were derived from historical data on precipitation and potential evapotranspiration from 1900 to 2023. The greater the z-value, the larger the departure from average conditions, indicating larger moisture deficits or surpluses. Thus, the dark red areas on this map indicate a five-year period with extremely dry conditions, relative to the average conditions over the past century. For further reading on the methodology used to build these maps, see the publication here: https://www.fs.usda.gov/treesearch/pubs/43361
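The z-score described above is simply the departure from the long-term mean measured in standard-deviation units. A minimal sketch of the definition (not the authors' actual computation, which operates on gridded moisture data):

```python
def z_score(value: float, mean: float, std: float) -> float:
    """How many standard deviations `value` sits from the long-term mean."""
    return (value - mean) / std

# A moisture value two standard deviations below the historical mean
# would map to the driest end of the color scale:
print(z_score(8.0, 10.0, 1.0))  # → -2.0
```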

  16. The codes and data for "A Graph Convolutional Neural Network-based Method...

    • figshare.com
    txt
    Updated Jan 14, 2025
    Cite
    FirstName LastName (2025). The codes and data for "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation" [Dataset]. http://doi.org/10.6084/m9.figshare.28200623.v2
    Available download formats: txt
    Dataset updated
    Jan 14, 2025
    Dataset provided by
    figshare
    Authors
    FirstName LastName
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of GeocomputationThis is the implementation for the paper "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation".The framework is Learning-based Computing Framework for Geospatial data(LCF-G).Prediction, ParallelComputation and SampleGeneration.This paper includes three case studies, each corresponding to a folder. Each folder contains four subfolders: data, CIThe data folder contains geospatail data.The CIPrediction folder contains model training code.The ParallelComputation folder contains geographic computation code.The SampleGeneration folder contains code for sample generation.Case 1: Generation of DEM from point cloud datastep 1: Data downloadDataset 1 has been uploaded to the directory 1point2dem/data. The other two datasets, Dataset 2 and Dataset 3, can be downloaded from the following website:OpenTopographyBelow are the steps for downloading Dataset 2 and Dataset 3, along with the query parameters:Dataset 2:Visit OpenTopography Website: Go to Dataset 2 Download Link.https://portal.opentopography.org/lidarDataset?opentopoID=OTLAS.112018.2193.1Coordinates & Classification:In the section "1. Coordinates & Classification", select the option "Manually enter selection coordinates".Set the coordinates as follows: Xmin = 1372495.692761,Ymin = 5076006.86821,Xmax = 1378779.529766,Ymax = 5085586.39531Point Cloud Data Download:Under section "2. Point Cloud Data Download", choose the option "Point cloud data in LAS format".Submit:Click on "SUBMIT" to initiate the download.Dataset 3:Visit OpenTopography Website:Go to Dataset 3 Download Link: https://portal.opentopography.org/lidarDataset?opentopoID=OTLAS.052016.26912.1Coordinates & Classification:In the section "1. 
Coordinates & Classification", select the option "Manually enter selection coordinates". Set the coordinates as follows:

- Xmin = 470047.153826
- Ymin = 4963418.512121
- Xmax = 479547.16556
- Ymax = 4972078.92768

Point cloud data download: under section "2. Point Cloud Data Download", choose the option "Point cloud data in LAS format".

Submit: click "SUBMIT" to initiate the download.

Step 2: Sample generation

This step involves data preparation, and samples can be generated using the provided code. Since the samples have already been uploaded to 1point2dem/SampleGeneration/data, this step is optional.

    cd 1point2dem/SampleGeneration
    g++ PointCloud2DEMSampleGeneration.cpp -o PointCloud2DEMSampleGeneration
    mpiexec -n {number_processes} ./PointCloud2DEMSampleGeneration ../data/pcd path/to/output

Step 3: Model training

This step trains three models (GCN, ChebNet, GATNet). The model results are saved in 1point2dem/SampleGeneration/result, and the results for Table 3 in the paper are derived from this output.

    cd 1point2dem/CIPrediction
    python -u point_prediction.py --model [GCN|ChebNet|GATNet]

Step 4: Parallel computation

This step uses the trained models to optimize parallel computation. The results for Figures 11-13 in the paper are generated from the output of this command.

    cd 1point2dem/ParallelComputation
    g++ ParallelPointCloud2DEM.cpp -o ParallelPointCloud2DEM
    mpiexec -n {number_processes} ./ParallelPointCloud2DEM ../data/pcd

Case 2: Spatial intersection of vector data

Step 1: Data download

Some data from the paper has been uploaded to 2intersection/data. The remaining OSM data can be downloaded from GeoFabrik: directly follow the link "GeoFabrik - Czech Republic OSM Data".

Step 2: Sample generation

This step involves data preparation, and samples can be generated using the provided code. Since the samples have already been uploaded to 2intersection/SampleGeneration/data, this step is optional.

    cd 2intersection/SampleGeneration
    g++ ParallelIntersection.cpp -o ParallelIntersection
    mpiexec -n {number_processes} ./ParallelIntersection ../data/shpfile ../data/shpfile

Step 3: Model training

This step trains three models (GCN, ChebNet, GATNet). The model results are saved in 2intersection/SampleGeneration/result, and the results for Table 5 in the paper are derived from this output.

    cd 2intersection/CIPrediction
    python -u vector_prediction.py --model [GCN|ChebNet|GATNet]

Step 4: Parallel computation

This step uses the trained models to optimize parallel computation. The results for Figures 14-16 in the paper are generated from the output of this command.

    cd 2intersection/ParallelComputation
    g++ ParallelIntersection.cpp -o ParallelIntersection
    mpiexec -n {number_processes} ./ParallelIntersection ../data/shpfile1 ../data/shpfile2

Case 3: WOfS analysis using raster data

Step 1: Data download

Some data from the paper has been uploaded to 3wofs/data. The remaining data can be downloaded from http://openge.org.cn/advancedRetrieval?type=dataset. Below are the query parameters:

- Product selection: LC08_L1TP and LC08_L1GT
- Longitude range: 112.5 (minimum) to 115.5 (maximum)
- Latitude range: 29.5 (minimum) to 31.5 (maximum)
- Time range: 2013-01-01 to 2018-12-31
- Other parameters: default

Step 2: Sample generation

This step involves data preparation, and samples can be generated using the provided code. Since the samples have already been uploaded to 3wofs/SampleGeneration/data, this step is optional.

    cd 3wofs/SampleGeneration
    sbt package
    spark-submit --master {host1,host2,host3} --class whu.edu.cn.core.cube.raster.WOfSSampleGeneration path/to/package.jar

Step 3: Model training

This step trains three models (GCN, ChebNet, GATNet). The model results are saved in 3wofs/SampleGeneration/result, and the results for Table 6 in the paper are derived from this output.

    cd 3wofs/CIPrediction
    python -u raster_prediction.py --model [GCN|ChebNet|GATNet]

Step 4: Parallel computation

This step uses the trained models to optimize parallel computation. The results for Figures 18 and 19 in the paper are generated from the output of this command.

    cd 3wofs/ParallelComputation
    sbt package
    spark-submit --master {host1,host2,host3} --class whu.edu.cn.core.cube.raster.WOfSOptimizedByDL path/to/package.jar path/to/output

Statement about Case 3

The experiment in Case 3 of this paper was conducted with improvements made on the GeoCube platform.

- Code name: GeoCube
- Code link: GeoCube Source Code
- License: The GeoCube project is openly available under the CC BY 4.0 license (Creative Commons Attribution 4.0 International), which allows anyone to freely share, modify, and distribute the platform's code.

Citation: Gao, Fan (2022). A multi-source spatio-temporal data cube for large-scale geospatial analysis. figshare. Software. https://doi.org/10.6084/m9.figshare.15032847.v1

Clarification statement: The authors of the GeoCube code are not affiliated with this manuscript. The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and do not depend on the GeoCube code.

Requirements

The code uses the following dependencies with Python 3.8:

- torch==2.0.0
- torch_geometric==2.5.3
- networkx==2.6.3
- pyshp==2.3.1
- tensorrt==8.6.1
- matplotlib==3.7.2
- scipy==1.10.1
- scikit-learn==1.3.0
- geopandas==0.13.2
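Case 1's core operation, rasterizing a point cloud into a regular DEM grid, can be sketched as cell-wise averaging of point elevations. This is a minimal illustration only; the cell size and sample points below are placeholders, not the paper's actual settings or algorithm.

```python
from collections import defaultdict

def points_to_dem(points, cell_size):
    """points: iterable of (x, y, z) tuples.
    Returns {(col, row): mean elevation} for each occupied grid cell."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        sums[key][0] += z
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

# Three synthetic points, 2 m cells: the first two fall in cell (0, 0).
pts = [(0.5, 0.5, 10.0), (1.5, 0.5, 12.0), (3.0, 0.5, 20.0)]
dem = points_to_dem(pts, 2.0)
print(dem)  # {(0, 0): 11.0, (1, 0): 20.0}
```

Cells with no points are simply absent from the result; a production DEM pipeline would also interpolate such gaps.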

  17. Data from: Ecosystem-Level Determinants of Sustained Activity in Open-Source...

    • zenodo.org
    application/gzip, bin +2
    Updated Aug 2, 2024
    + more versions
    Cite
    Marat Valiev; Bogdan Vasilescu; James Herbsleb (2024). Ecosystem-Level Determinants of Sustained Activity in Open-Source Projects: A Case Study of the PyPI Ecosystem [Dataset]. http://doi.org/10.5281/zenodo.1419788
    Explore at:
    bin, application/gzip, zip, text/x-python
    Available download formats
    Dataset updated
    Aug 2, 2024
    Dataset provided by
    Zenodo
    http://zenodo.org/
    Authors
    Marat Valiev; Bogdan Vasilescu; James Herbsleb
    License

    https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html

    Description
    Replication pack, FSE2018 submission #164:
    ------------------------------------------
    
    **Working title:** Ecosystem-Level Factors Affecting the Survival of Open-Source Projects: 
    A Case Study of the PyPI Ecosystem
    
    **Note:** link to data artifacts is already included in the paper. 
    Link to the code will be included in the Camera Ready version as well.
    
    
    Content description
    ===================
    
    - **ghd-0.1.0.zip** - the code archive. This code produces the dataset files 
     described below
    - **settings.py** - settings template for the code archive.
    - **dataset_minimal_Jan_2018.zip** - the minimally sufficient version of the dataset.
     This dataset only includes stats aggregated by the ecosystem (PyPI)
    - **dataset_full_Jan_2018.tgz** - full version of the dataset, including project-level
     statistics. It is ~34Gb unpacked. This dataset still doesn't include PyPI packages
     themselves, which take around 2TB.
    - **build_model.r, helpers.r** - R files to process the survival data 
      (`survival_data.csv` in **dataset_minimal_Jan_2018.zip**, 
      `common.cache/survival_data.pypi_2008_2017-12_6.csv` in 
      **dataset_full_Jan_2018.tgz**)
    - **Interview protocol.pdf** - approximate protocol used for semistructured interviews.
    - LICENSE - text of GPL v3, under which this dataset is published
    - INSTALL.md - replication guide (~2 pages)

    Replication guide
    =================
    
    Step 0 - prerequisites
    ----------------------
    
    - Unix-compatible OS (Linux or OS X)
    - Python interpreter (2.7 was used; Python 3 compatibility is highly likely)
    - R 3.4 or higher (3.4.4 was used, 3.2 is known to be incompatible)
    
    Depending on detalization level (see Step 2 for more details):
    - up to 2Tb of disk space (see Step 2 detalization levels)
    - at least 16Gb of RAM (64 preferable)
    - a few hours to a few months of processing time
    
    Step 1 - software
    ----------------
    
    - unpack **ghd-0.1.0.zip**, or clone from GitLab:
    
       git clone https://gitlab.com/user2589/ghd.git
       cd ghd
       git checkout 0.1.0
     
     `cd` into the extracted folder. 
     All commands below assume it as a current directory.
      
    - copy `settings.py` into the extracted folder. Edit the file:
      * set `DATASET_PATH` to some newly created folder path
      * add at least one GitHub API token to `SCRAPER_GITHUB_API_TOKENS` 
    - install docker. For Ubuntu Linux, the command is 
      `sudo apt-get install docker-compose`
    - install libarchive and headers: `sudo apt-get install libarchive-dev`
    - (optional) to replicate on NPM, install yajl: `sudo apt-get install yajl-tools`
     Without this dependency, you might get an error on the next step, 
     but it's safe to ignore.
    - install Python libraries: `pip install --user -r requirements.txt` . 
    - disable all APIs except GitHub (Bitbucket and GitLab support were
     not yet implemented when this study was in progress): edit
     `scraper/__init__.py` and comment out everything except GitHub support
     in `PROVIDERS`.
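    The `settings.py` edits described above might look like the following sketch; both values are placeholders, not real paths or tokens.

    ```python
    # settings.py -- illustrative placeholders only
    DATASET_PATH = "/data/ghd"  # any newly created, writable folder
    SCRAPER_GITHUB_API_TOKENS = [
        "<github-api-token>",   # at least one token is required
    ]
    ```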
    
    Step 2 - obtaining the dataset
    -----------------------------
    
    The ultimate goal of this step is to get output of the Python function 
    `common.utils.survival_data()` and save it into a CSV file:
    
      # copy and paste into a Python console
      from common import utils
      survival_data = utils.survival_data('pypi', '2008', smoothing=6)
      survival_data.to_csv('survival_data.csv')
    
    Since full replication will take several months, here are some ways to speedup
    the process:
    
    #### Option 2.a, difficulty level: easiest
    
    Just use the precomputed data. Step 1 is not necessary under this scenario.
    
    - extract **dataset_minimal_Jan_2018.zip**
    - get `survival_data.csv`, go to the next step
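    A quick sanity check of the extracted `survival_data.csv` can be done with Python's stdlib `csv` module. This is a generic sketch; the file path in the usage comment is an assumption, and no column names are shown since the dataset's schema is not documented here.

    ```python
    import csv

    def preview_csv(path, n=3):
        """Return the header row and the first n data rows of a CSV file."""
        with open(path, newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            rows = [row for _, row in zip(range(n), reader)]
        return header, rows

    # Usage (path is hypothetical):
    # header, rows = preview_csv("survival_data.csv")
    ```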
    
    #### Option 2.b, difficulty level: easy
    
    Use precomputed longitudinal feature values to build the final table.
    The whole process will take 15..30 minutes.
    
    - create a folder `
  18. CEDaR Chough Option ESA (Pre-defined Download)

    • data.europa.eu
    • data.wu.ac.at
    html, unknown
    Updated Apr 30, 2021
    + more versions
    Cite
    Northern Ireland Spatial Data Infrastructure (2021). CEDaR Chough Option ESA (Pre-defined Download) [Dataset]. https://data.europa.eu/data/datasets/cedar-chough-option-esa-pre-defined-download?locale=lv
    Explore at:
    unknown, html
    Available download formats
    Dataset updated
    Apr 30, 2021
    Dataset authored and provided by
    Northern Ireland Spatial Data Infrastructure
    Description

    An inventory of 421 invertebrate records collected using pitfall traps at sites on the north Antrim coast, collated by Jim McAdam; the records fall within the date range 1998-2002.

    Users outside of the Spatial NI Portal should use Resource Locator 2.

  19. Internet Access Technology Options

    • data.ccrpc.org
    csv
    Updated Jun 3, 2022
    Cite
    Champaign County Regional Planning Commission (2022). Internet Access Technology Options [Dataset]. https://data.ccrpc.org/dataset/internet-access-options
    Explore at:
    csv
    Available download formats
    Dataset updated
    Jun 3, 2022
    Dataset authored and provided by
    Champaign County Regional Planning Commission
    License

    Open Database License (ODbL) v1.0
    https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Description

    The Internet access indicator measures the prevalence of different Internet technology options available in Champaign County, Illinois, and the U.S., at two different speeds: 4/1 Mbps and 25/3 Mbps.

    Seven types of connection options are evaluated: ADSL, cable, fiber, fixed wireless, satellite, "other" technology, and "any" technology, which includes the previous six options.

    Satellite internet, at both speeds, is the most widely available in all three areas. One hundred percent of Champaign County residents have access to satellite internet at both speeds. Cable internet is also widely available across all three areas, and over 90 percent of Champaign County residents have access to cable internet. Fiber internet is the least widely available type of technology, aside from "other" technology. However, fiber internet is now available to almost 38 percent of Champaign County residents as of December 2020, an increase from approximately 25 percent in June 2020.

    The ability of Champaign County residents to access the Internet has become key in many facets of life, especially during the COVID-19 pandemic. Internet access provides economic, educational, and social opportunities; having or not having Internet access has become not only a technological issue, but an equity issue.

    This data was retrieved from the Federal Communications Commission’s Fixed Broadband Deployment Area Comparison, and dates from December 2020.

    Source: Federal Communications Commission. (2020). Fixed Broadband Deployment. Area Comparison. https://broadbandmap.fcc.gov/#/. (Accessed 3 June 2022).

  20. CBOE Equity VIX on Google

    • fred.stlouisfed.org
    json
    Updated Sep 10, 2025
    Cite
    (2025). CBOE Equity VIX on Google [Dataset]. https://fred.stlouisfed.org/series/VXGOGCLS
    Explore at:
    json
    Available download formats
    Dataset updated
    Sep 10, 2025
    License

    https://fred.stlouisfed.org/legal/#copyright-citation-required

    Description

    Graph and download economic data for CBOE Equity VIX on Google (VXGOGCLS) from 2010-06-01 to 2025-09-09 about VIX, volatility, equity, stock market, and USA.


Search
Clear search
Close search
Google apps
Main menu