https://optiondata.org/about.html
Free historical options data, dataset files in CSV format.
https://optiondata.org/about.html
Historical option data from 2019 to 2021, dataset files in CSV format.
https://optiondata.org/about.html
Historical option data from the last 24 years, dataset files in CSV format.
https://optiondata.org/about.html
Historical option EOD data for 2021, dataset files in CSV format.
We offer three easy-to-understand packages to fit your business needs. Visit intrinio.com/pricing to compare packages.
Bronze
The Bronze package is ideal for developing your idea and prototyping your platform with high-quality EOD options prices sourced from OPRA.
When you’re ready for launch, it’s a seamless transition to our Silver package for delayed options prices, Greeks and implied volatility, and unusual options activity, plus delayed equity prices.
Exchange Fees & Requirements:
This package requires no paperwork or exchange fees.
Bronze Benefits:
Silver
The Silver package is ideal for clients that want delayed options data for their platform, or for startups in the development and testing phase. You’ll get 15-minute delayed options data, Greeks, implied volatility, and unusual options activity, plus the latest EOD options prices and delayed equity prices.
You can easily move up to the Gold package for real-time options and equity prices, additional access methods, and premium support options.
Exchange Fees & Requirements:
If you subscribe to the Silver package and will not display the data outside of your firm, you’ll need to fill out a simplified exchange agreement and send it back to us. There are no exchange fees and we can provide immediate access to the data.
If you subscribe to the Silver package and will display the data outside of your firm, we’ll work with your team to submit the correct paperwork to OPRA for approval. Once approved, OPRA will bill exchange fees directly to your firm – typically $600-$2000/month depending on your use case. These fees are the same no matter what data provider you use. Per-user reporting is not required, so there are no variable per user fees.
Silver Benefits:
Gold
The Gold package is ideal for funded companies that are in the growth or scaling stage, as well as institutions that are innovating within the fintech space. This full-service solution offers real-time options prices, Greeks and implied volatility, and unusual options activity, as well as the latest EOD options prices and real-time equity prices.
You’ll also have access to our wide range of modern access methods, third-party data via Intrinio’s API with licensing assistance, support from our team of expert engineers, custom delivery architectures, and much more.
Exchange Fees & Requirements:
If you subscribe to the Gold package, we’ll work with your team to submit the correct paperwork to OPRA for approval. Once approved, OPRA will bill exchange fees directly to your firm – typically $600-$2000/month depending on your use case. These fees are the same no matter what data provider you use. Per-user reporting is required, with an associated variable per user fee.
Gold Benefits:
Platinum
Don’t see a package that fits your needs? Our team can design a premium custom package for your business.
https://option.discount/privacy.html
Historical option sample data for 2022-08-24, dataset files in CSV format.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Downloading the Options IV SP500 Dataset
This document will guide you through the steps to download the Options IV SP500 dataset from Hugging Face Datasets. This dataset includes data on the options of the S&P 500, including implied volatility. To start, you'll need to install Hugging Face's datasets library if you haven't done so already. You can do this using the following pip command: !pip install datasets
Here's the Python code to load the Options IV SP500 dataset from Hugging… See the full description on the dataset page: https://huggingface.co/datasets/gauss314/options-IV-SP500.
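As a quick sketch of that loading step (the split name "train" is an assumption; check the dataset page for the actual configuration):

from datasets import load_dataset

# Download and cache the dataset from the Hugging Face Hub
ds = load_dataset("gauss314/options-IV-SP500")
print(ds)  # shows available splits and features

# Convert a split to pandas for analysis (assumes a "train" split)
df = ds["train"].to_pandas()
print(df.head())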
https://www.traditiondata.com/terms-conditions/
TraditionData’s FX Options Market Data service provides comprehensive information on FX options markets, leveraging the Volbroker platform for transparency and efficiency.
Visit FX Options Market Data for more information.
https://fred.stlouisfed.org/legal/#copyright-citation-required
Graph and download economic data for CBOE Volatility Index: VIX (VIXCLS) from 1990-01-02 to 2025-09-17 about VIX, volatility, stock market, and USA.
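These FRED series can also be fetched programmatically by series ID. Below is a minimal sketch using the pandas-datareader package (a common client choice, not something FRED mandates); the same pattern applies to the other FRED series in this list (BAMLC0A0CM, VXVCLS, VXGOGCLS):

import pandas_datareader.data as web

# Pull the daily VIX close (series ID VIXCLS) from FRED
vix = web.DataReader("VIXCLS", "fred", start="1990-01-02")
print(vix.tail())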
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Download Options Score from Open Data Watch. Openness element 3 measures whether data are available with three different download options: bulk download, API, and user-select options.

A bulk download is defined at the indicator level as the ability to download all data recorded in the Open Data Inventory (ODIN) for a particular indicator (all years, disaggregations, and subnational data) in one file, or in multiple files that can be downloaded simultaneously. Bulk downloads are a key component of the Open Definition, which requires data to be "provided as a whole . . . and downloadable via the internet."

User-selectable download options are defined as follows: users must be able to select an indicator and at least one other dimension to create a download or table. These dimensions could include time periods, geographic disaggregations, or other recommended disaggregations. An option to choose the file export format is not enough.

API stands for Application Programming Interface. Ideally, APIs should be clearly displayed on the website. ODIN assumes APIs are available for the NSO's entire data collection used in ODIN, unless clearly stated. ODIN assessors do not register for use or test API functionality. For more information on APIs, see this guide.

Scores are given by data category, not indicator.
https://fred.stlouisfed.org/legal/#copyright-pre-approval
Graph and download economic data for ICE BofA US Corporate Index Option-Adjusted Spread (BAMLC0A0CM) from 1996-12-31 to 2025-09-17 about option-adjusted spread, corporate, and USA.
https://fred.stlouisfed.org/legal/#copyright-citation-required
Graph and download economic data for CBOE S&P 500 3-Month Volatility Index (VXVCLS) from 2007-12-04 to 2025-09-18 about VIX, volatility, stock market, 3-month, and USA.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The purpose of this document is to accompany the public release of data collected from OpenCon 2015 applications.

Download & Technical Information

The data can be downloaded in CSV format from GitHub here: https://github.com/RightToResearch/OpenCon-2015-Application-Data. The file uses UTF-8 encoding, comma as field delimiter, quotation marks as text delimiter, and no byte order mark.
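Given that format description, a minimal loading sketch with pandas (the local filename here is hypothetical; use the CSV obtained from the GitHub repository above):

import pandas as pd

# Matches the published format: UTF-8, comma field delimiter,
# quotation marks as text delimiter, no byte order mark.
df = pd.read_csv("opencon-2015-applications.csv",
                 encoding="utf-8", sep=",", quotechar='"')
print(df.shape)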
This data is released to the public for free and open use under a CC0 1.0 license. We have a couple of requests for anyone who uses the data. First, we’d love it if you would let us know what you are doing with it, and share back anything you develop with the OpenCon community (#opencon / @open_con). Second, it would also be great if you would include a link to the OpenCon 2015 website (www.opencon2015.org) wherever the data is used. You are not obligated to do any of this, but we’d appreciate it!
Unique ID
This is a unique ID assigned to each applicant. Numbers were assigned using a random number generator.
Timestamp
This was the timestamp recorded by Google Forms. Timestamps are in EDT (Eastern U.S. Daylight Time). Note that the application process officially began at 1:00pm EDT on June 1 and ended at 6:00am EDT on June 23. Some applications have timestamps later than this date, due to a variety of reasons including exceptions granted for technical difficulties, error corrections (which required re-submitting the form), and applications sent in via email and later entered manually into the form.
Gender
Mandatory. Choose one from list or fill-in other. Options provided: Male, Female, Other (fill in).
Country of Nationality
Mandatory. Choose one option from list.
Country of Residence
Mandatory. Choose one option from list.
What is your primary occupation?
Mandatory. Choose one from list or fill-in other. Options provided: Undergraduate student; Masters/professional student; PhD candidate; Faculty/teacher; Researcher (non-faculty); Librarian; Publisher; Professional advocate; Civil servant / government employee; Journalist; Doctor / medical professional; Lawyer; Other (fill in).
Select the option below that best describes your field of study or expertise
Mandatory. Choose one option from list.
What is your primary area of interest within OpenCon’s program areas?
Mandatory. Choose one option from list. Note: for the first approximately 24 hours the options were listed in this order: Open Access, Open Education, Open Data. After that point, we set the form to randomize the order, and noticed an immediate shift in the distribution of responses.
Are you currently engaged in activities to advance Open Access, Open Education, and/or Open Data?
Mandatory. Choose one option from list.
Are you planning to participate in any of the following events this year?
Optional. Choose all that apply from list. Multiple selections separated by semi-colon.
Do you have any of the following skills or interests?
Mandatory. Choose all that apply from list or fill-in other. Multiple selections separated by semi-colon. Options provided: Coding; Website Management / Design; Graphic Design; Video Editing; Community / Grassroots Organizing; Social Media Campaigns; Fundraising; Communications and Media; Blogging; Advocacy and Policy; Event Logistics; Volunteer Management; Research about OpenCon's Issue Areas; Other (fill-in).
This data consists of information collected from people who applied to attend OpenCon 2015. In the application form, questions that would be released as Open Data were marked with a caret (^) and applicants were asked to acknowledge before submitting the form that they understood that their responses to these questions would be released as such. The questions we released were selected to avoid any potentially sensitive personal information, and to minimize the chances that any individual applicant can be positively identified. Applications were formally collected during a 22-day period beginning on June 1, 2015 at 13:00 EDT and ending on June 23 at 06:00 EDT. Some applications have timestamps later than this date, due to a variety of reasons including exceptions granted for technical difficulties, error corrections (which required re-submitting the form), and applications sent in via email and later entered manually into the form. Applications were collected using a Google Form embedded at http://www.opencon2015.org/attend, and the shortened bit.ly link http://bit.ly/AppsAreOpen was promoted through social media. The primary work we did to clean the data focused on identifying and eliminating duplicates. We removed all duplicate applications that had matching e-mail addresses and first and last names. We also identified a handful of other duplicates that used different e-mail addresses but were otherwise identical. In cases where duplicate applications contained any different information, we kept the information from the version with the most recent timestamp. We made a few minor adjustments in the country field for cases where the entry was obviously an error (for example, selecting a country listed alphabetically above or below the one indicated elsewhere in the application). We also removed one potentially offensive comment (which did not contain an answer to the question) from the Gender field and replaced it with “Other.”
OpenCon 2015 is the student and early career academic professional conference on Open Access, Open Education, and Open Data and will be held on November 14-16, 2015 in Brussels, Belgium. It is organized by the Right to Research Coalition, SPARC (The Scholarly Publishing and Academic Resources Coalition), and an Organizing Committee of students and early career researchers from around the world. The meeting will convene students and early career academic professionals from around the world and serve as a powerful catalyst for projects led by the next generation to advance OpenCon's three focus areas—Open Access, Open Education, and Open Data. A unique aspect of OpenCon is that attendance at the conference is by application only, and the majority of participants who apply are awarded travel scholarships to attend. This model creates a unique conference environment where the most dedicated and impactful advocates can attend, regardless of where in the world they live or their access to travel funding. The purpose of the application process is to conduct these selections fairly. This year we were overwhelmed by the quantity and quality of applications received, and we hope that by sharing this data, we can better understand the OpenCon community and the state of student and early career participation in the Open Access, Open Education, and Open Data movements.
For inquiries about the OpenCon 2015 Application data, please contact Nicole Allen at nicole@sparc.arl.org.
Our proprietary Skew-Adjusted Gamma Exposure measurements adjust Naive GEX calculations to more accurately reflect the actual gamma positioning of Market Makers who employ delta-hedging strategies. When Market Makers carry substantial negative gamma, a security will often "over-react" to fundamental news. Conversely, when MMs carry substantial positive gamma, a security will often "under-react" to news. Our data includes a quantified segmentation of a security's gamma distribution across all option strikes as well as across relevant expiration dates. Our website provides numerical, graphical, and historical views of all gamma data in our database. Additionally, our API access allows for easy download of CSV files or import into Excel for further analysis and custom applications.
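For orientation, here is a minimal sketch of the naive GEX aggregation that the skew adjustment refines. This is the common unadjusted baseline, not the proprietary calculation, and the DataFrame column names are hypothetical:

import pandas as pd

def naive_gex(chain: pd.DataFrame, spot: float) -> float:
    # Common naive convention: count call gamma as positive dealer
    # exposure and put gamma as negative, scaled by open interest,
    # the 100-share contract multiplier, and the spot price.
    sign = chain["type"].map({"call": 1.0, "put": -1.0})
    return float((sign * chain["gamma"] * chain["open_interest"]
                  * 100.0 * spot).sum())

# Toy example with made-up values
chain = pd.DataFrame({
    "type": ["call", "put"],
    "gamma": [0.020, 0.015],
    "open_interest": [1500, 2200],
})
print(naive_gex(chain, spot=450.0))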
Note: To download this raster dataset, go to ArcGIS Open Data Set and click the download button, and under additional resources select the raster download option; the data can also be downloaded directly from the FSGeodata Clearinghouse. To summarize this dataset by U.S. Forest Service Lands, see the Drought Summary Tool. You can also explore cumulative drought and moisture changes from this StoryMap; additional drought products from the Office of Sustainability and Climate are available in our Climate Gallery and the OSC Drought page.

The Moisture Deficit and Surplus map uses moisture difference z-score datasets developed by scientists Frank Koch, John Coulston, and William Smith of the Forest Service Southern Research Station. A z-score is a statistical method for assessing how different a value is from the mean (average). Mean moisture values were derived from historical data on precipitation and potential evapotranspiration, from 1900 to 2023. The greater the z-value, the larger the departure from average conditions, indicating larger moisture deficits or surpluses. Thus, the dark red areas on this map indicate a five-year period with extremely dry conditions, relative to the average conditions over the past century. For further reading on the methodology used to build these maps, see the publication here: https://www.fs.usda.gov/treesearch/pubs/43361
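To illustrate the z-score idea with made-up numbers (these are stand-in values, not the actual Koch et al. moisture inputs):

import numpy as np

# Stand-in baseline: one moisture-difference value per year, 1900-2023
baseline = np.random.default_rng(42).normal(loc=0.0, scale=1.5, size=124)

recent = -3.2  # stand-in five-year moisture difference
z = (recent - baseline.mean()) / baseline.std(ddof=1)
print(z)  # a strongly negative z indicates unusually dry conditions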
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation

This is the implementation for the paper "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation". The framework is the Learning-based Computing Framework for Geospatial data (LCF-G).

This paper includes three case studies, each corresponding to a folder. Each folder contains four subfolders: data, CIPrediction, ParallelComputation, and SampleGeneration. The data folder contains geospatial data, the CIPrediction folder contains model training code, the ParallelComputation folder contains geographic computation code, and the SampleGeneration folder contains code for sample generation.

Case 1: Generation of DEM from point cloud data

Step 1: Data download. Dataset 1 has been uploaded to the directory 1point2dem/data. The other two datasets, Dataset 2 and Dataset 3, can be downloaded from OpenTopography. The download steps and query parameters are:

Dataset 2 (https://portal.opentopography.org/lidarDataset?opentopoID=OTLAS.112018.2193.1):
- In section "1. Coordinates & Classification", select "Manually enter selection coordinates" and set Xmin = 1372495.692761, Ymin = 5076006.86821, Xmax = 1378779.529766, Ymax = 5085586.39531.
- Under section "2. Point Cloud Data Download", choose "Point cloud data in LAS format".
- Click "SUBMIT" to initiate the download.

Dataset 3 (https://portal.opentopography.org/lidarDataset?opentopoID=OTLAS.052016.26912.1):
- In section "1. Coordinates & Classification", select "Manually enter selection coordinates" and set Xmin = 470047.153826, Ymin = 4963418.512121, Xmax = 479547.16556, Ymax = 4972078.92768.
- Under section "2. Point Cloud Data Download", choose "Point cloud data in LAS format".
- Click "SUBMIT" to initiate the download.

Step 2: Sample generation. This step involves data preparation, and samples can be generated using the provided code. Since the samples have already been uploaded to 1point2dem/SampleGeneration/data, this step is optional.

cd 1point2dem/SampleGeneration
g++ PointCloud2DEMSampleGeneration.cpp -o PointCloud2DEMSampleGeneration
mpiexec -n {number_processes} ./PointCloud2DEMSampleGeneration ../data/pcd path/to/output

Step 3: Model training. This step trains three models (GCN, ChebNet, GATNet). The model results are saved in 1point2dem/SampleGeneration/result; the results for Table 3 in the paper are derived from this output.

cd 1point2dem/CIPrediction
python -u point_prediction.py --model [GCN|ChebNet|GATNet]

Step 4: Parallel computation. This step uses the trained models to optimize parallel computation. The results for Figures 11-13 in the paper are generated from the output of this command.

cd 1point2dem/ParallelComputation
g++ ParallelPointCloud2DEM.cpp -o ParallelPointCloud2DEM
mpiexec -n {number_processes} ./ParallelPointCloud2DEM ../data/pcd

Case 2: Spatial intersection of vector data

Step 1: Data download. Some data from the paper has been uploaded to 2intersection/data. The remaining OSM data can be downloaded directly from GeoFabrik: GeoFabrik - Czech Republic OSM Data.

Step 2: Sample generation. Since the samples have already been uploaded to 2intersection/SampleGeneration/data, this step is optional.

cd 2intersection/SampleGeneration
g++ ParallelIntersection.cpp -o ParallelIntersection
mpiexec -n {number_processes} ./ParallelIntersection ../data/shpfile ../data/shpfile

Step 3: Model training. This step trains three models (GCN, ChebNet, GATNet). The model results are saved in 2intersection/SampleGeneration/result; the results for Table 5 in the paper are derived from this output.

cd 2intersection/CIPrediction
python -u vector_prediction.py --model [GCN|ChebNet|GATNet]

Step 4: Parallel computation. This step uses the trained models to optimize parallel computation. The results for Figures 14-16 in the paper are generated from the output of this command.

cd 2intersection/ParallelComputation
g++ ParallelIntersection.cpp -o ParallelIntersection
mpiexec -n {number_processes} ./ParallelIntersection ../data/shpfile1 ../data/shpfile2

Case 3: WOfS analysis using raster data

Step 1: Data download. Some data from the paper has been uploaded to 3wofs/data. The remaining data can be downloaded from http://openge.org.cn/advancedRetrieval?type=dataset with the following query parameters:
- Product selection: LC08_L1TP and LC08_L1GT
- Longitude: 112.5 (minimum) to 115.5 (maximum); Latitude: 29.5 (minimum) to 31.5 (maximum)
- Time range: 2013-01-01 to 2018-12-31
- Other parameters: default

Step 2: Sample generation. Since the samples have already been uploaded to 3wofs/SampleGeneration/data, this step is optional.

cd 3wofs/SampleGeneration
sbt package
spark-submit --master {host1,host2,host3} --class whu.edu.cn.core.cube.raster.WOfSSampleGeneration path/to/package.jar

Step 3: Model training. This step trains three models (GCN, ChebNet, GATNet). The model results are saved in 3wofs/SampleGeneration/result; the results for Table 6 in the paper are derived from this output.

cd 3wofs/CIPrediction
python -u raster_prediction.py --model [GCN|ChebNet|GATNet]

Step 4: Parallel computation. This step uses the trained models to optimize parallel computation. The results for Figures 18 and 19 in the paper are generated from the output of this command.

cd 3wofs/ParallelComputation
sbt package
spark-submit --master {host1,host2,host3} --class whu.edu.cn.core.cube.raster.WOfSOptimizedByDL path/to/package.jar path/to/output

Statement about Case 3: The experiment in Case 3 was conducted with improvements made on the GeoCube platform.
- Code name: GeoCube
- Code link: GeoCube Source Code
- License information: The GeoCube project is openly available under CC BY 4.0, the Creative Commons Attribution 4.0 International License, allowing anyone to freely share, modify, and distribute the platform's code.
- Citation: Gao, Fan (2022). A multi-source spatio-temporal data cube for large-scale geospatial analysis. figshare. Software. https://doi.org/10.6084/m9.figshare.15032847.v1
- Clarification statement: The authors of this code are not affiliated with this manuscript. The innovations and steps in Case 3, including data download, sample generation, and parallel computation optimization, were independently developed and are not dependent on the GeoCube's code.

Requirements: The code uses the following dependencies with Python 3.8:
torch==2.0.0
torch_geometric==2.5.3
networkx==2.6.3
pyshp==2.3.1
tensorrt==8.6.1
matplotlib==3.7.2
scipy==1.10.1
scikit-learn==1.3.0
geopandas==0.13.2
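One way to install those pinned versions (a sketch assuming pip on Python 3.8; torch_geometric and tensorrt may need platform-specific wheels):

pip install torch==2.0.0 torch_geometric==2.5.3 networkx==2.6.3 pyshp==2.3.1 tensorrt==8.6.1 matplotlib==3.7.2 scipy==1.10.1 scikit-learn==1.3.0 geopandas==0.13.2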
https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html
Replication pack, FSE2018 submission #164
------------------------------------------

**Working title:** Ecosystem-Level Factors Affecting the Survival of Open-Source Projects: A Case Study of the PyPI Ecosystem

**Note:** link to data artifacts is already included in the paper. Link to the code will be included in the Camera Ready version as well.

Content description
===================

- **ghd-0.1.0.zip** - the code archive. This code produces the dataset files described below.
- **settings.py** - settings template for the code archive.
- **dataset_minimal_Jan_2018.zip** - the minimally sufficient version of the dataset. This dataset only includes stats aggregated by the ecosystem (PyPI).
- **dataset_full_Jan_2018.tgz** - full version of the dataset, including project-level statistics. It is ~34Gb unpacked. This dataset still doesn't include PyPI packages themselves, which take around 2TB.
- **build_model.r, helpers.r** - R files to process the survival data (`survival_data.csv` in **dataset_minimal_Jan_2018.zip**, `common.cache/survival_data.pypi_2008_2017-12_6.csv` in **dataset_full_Jan_2018.tgz**).
- LICENSE - text of GPL v3, under which this dataset is published.
- INSTALL.md - replication guide (~2 pages).
- **Interview protocol.pdf** - approximate protocol used for semistructured interviews.

Replication guide
=================

Step 0 - prerequisites
----------------------

- Unix-compatible OS (Linux or OS X)
- Python interpreter (2.7 was used; Python 3 compatibility is highly likely)
- R 3.4 or higher (3.4.4 was used, 3.2 is known to be incompatible)

Depending on detalization level (see Step 2 for more details):

- up to 2Tb of disk space (see Step 2 detalization levels)
- at least 16Gb of RAM (64 preferable)
- a few hours to a few months of processing time

Step 1 - software
----------------

- unpack **ghd-0.1.0.zip**, or clone from gitlab:

      git clone https://gitlab.com/user2589/ghd.git
      git checkout 0.1.0

  `cd` into the extracted folder. All commands below assume it as a current directory.
- copy `settings.py` into the extracted folder. Edit the file:
  * set `DATASET_PATH` to some newly created folder path
  * add at least one GitHub API token to `SCRAPER_GITHUB_API_TOKENS`
- install docker. For Ubuntu Linux, the command is `sudo apt-get install docker-compose`
- install libarchive and headers: `sudo apt-get install libarchive-dev`
- (optional) to replicate on NPM, install yajl: `sudo apt-get install yajl-tools`. Without this dependency, you might get an error on the next step, but it's safe to ignore.
- install Python libraries: `pip install --user -r requirements.txt`
- disable all APIs except GitHub (Bitbucket and Gitlab support were not yet implemented when this study was in progress): edit `scraper/__init__.py`, comment out everything except GitHub support in `PROVIDERS`.

Step 2 - obtaining the dataset
-----------------------------

The ultimate goal of this step is to get the output of the Python function `common.utils.survival_data()` and save it into a CSV file:

      # copy and paste into a Python console
      from common import utils
      survival_data = utils.survival_data('pypi', '2008', smoothing=6)
      survival_data.to_csv('survival_data.csv')

Since full replication will take several months, here are some ways to speed up the process:

#### Option 2.a, difficulty level: easiest

Just use the precomputed data. Step 1 is not necessary under this scenario.

- extract **dataset_minimal_Jan_2018.zip**
- get `survival_data.csv`, go to the next step

#### Option 2.b, difficulty level: easy

Use precomputed longitudinal feature values to build the final table. The whole process will take 15..30 minutes.

- create a folder `
An inventory of 421 invertebrate records collected using pitfall traps at sites on the north Antrim coast, collated by Jim McAdam; the records fall within the date range 1998-2002.
Users outside of the Spatial NI Portal should use Resource Locator 2.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
The Internet access indicator measures the prevalence of different Internet technology options available in Champaign County, Illinois, and the U.S., at two different speeds: 4/1 Mbps and 25/3 Mbps.
Seven types of connection options are evaluated: ADSL, cable, fiber, fixed wireless, satellite, "other" technology, and "any" technology, which includes the previous six options.
Satellite internet, at both speeds, is the most widely available in all three areas. One hundred percent of Champaign County residents have access to satellite internet at both speeds. Cable internet is also widely available across all three areas, and over 90 percent of Champaign County residents have access to cable internet. Fiber internet is the least widely available type of technology, aside from "other" technology. However, fiber internet is now available to almost 38 percent of Champaign County residents as of December 2020, an increase from approximately 25 percent in June 2020.
The ability of Champaign County residents to access the Internet has become key in many facets of life, especially during the COVID-19 pandemic. Internet access provides economic, educational, and social opportunities; having or not having Internet access has become not only a technological issue, but an equity issue.
This data was retrieved from the Federal Communications Commission’s Fixed Broadband Deployment Area Comparison, and dates from December 2020.
Source: Federal Communications Commission. (2020). Fixed Broadband Deployment. Area Comparison. https://broadbandmap.fcc.gov/#/. (Accessed 3 June 2022).
https://fred.stlouisfed.org/legal/#copyright-citation-required
Graph and download economic data for CBOE Equity VIX on Google (VXGOGCLS) from 2010-06-01 to 2025-09-09 about VIX, volatility, equity, stock market, and USA.