Unfortunately, the API this dataset used to pull the stock data isn't free anymore. Instead of having this auto-updating, I dropped the last version of the data files in here, so at least the historic data is still usable.
This dataset provides free end-of-day data for all stocks currently in the Dow Jones Industrial Average. For each of the 30 components of the index, there is one CSV file named by the stock's symbol (e.g. AAPL for Apple). Each file provides historically adjusted market-wide data (daily, max. 5 years back). See here for a description of the columns: https://iextrading.com/developer/docs/#chart
Since this dataset used remote URLs as files, it was automatically updated daily by the Kaggle platform and always represented the latest data.
List of stocks and symbols as per https://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average
Thanks to https://iextrading.com for providing this data for free!
Data provided for free by IEX. View IEX’s Terms of Use.
Cbonds collects and normalizes indices data, offering daily updated and historical data on over 40,000 indices, including macroeconomic indicators, yield curves and spreads, currency markets, stock and funds markets, and commodities. Using the Indices API, you can access an index's holdings, such as its assets, sectors, and weights, as well as basic data on each asset. You can obtain end-of-day and historical indicator prices via the API in CSV, XLS, and JSON formats. Cbonds provides a free Indices API for a limited test period of two weeks, or for a longer period with a limited number of instruments.
https://finazon.io/assets/files/Finazon_Terms_of_Service.pdf
The best choice for those looking for license-free US market data for commercial use is US Equities Basic, which includes data display, redistribution, professional trading, and more.
US Equities Basic is based upon a derived IEX feed. The volume coverage is 3-5% of the total trading volume in North America, which helps entities mitigate license expenses and start with real-time data.
US Equities Basic provides raw quotes, trades, aggregated time series (OHLCV), and snapshots. Both REST API and WebSocket API are available.
End-of-day price information disseminated after 12:00 AM EST does not require licensing in the United States by law. This applies to all exchanges, even those not included in the US Equities Basic. Finazon combines all price information after every trading day, meaning that while markets are open, real-time prices are available from a subset of exchanges, and when markets close, data is synced and contains 100% of US volume. All historical prices are adjusted for corporate actions and splits.
Tip: Individuals with non-professional usage are not required to get exchange licenses for real-time data and, hence, are better off with the US Equities Max dataset.
This dataset offers both live (delayed) prices and end-of-day time series on equity options.
1/ Live (delayed) prices for options on European stocks and indices including:
Reference spot price, bid/ask screen price, fair value price (based on surface calibration), implied volatility, forward
Greeks: delta, vega
Canari.dev computes AI-generated forecast signals indicating which options are over- or underpriced, based on the holder's strategy (buy and hold until maturity, a 1-hour to 2-day holding horizon, etc.). From these signals a "Canari price" is derived, which is also available in these live tables.
Visit our website (canari.dev) for more details about our forecast signals.
The delay ranges from 15 to 40 minutes depending on underlyings.
2/ Historical time series:
Implied vol
Realized vol
Smile
Forward
See a full API presentation here: https://youtu.be/qitPO-SFmY4
These data are also readily accessible in Excel thanks to the provided add-in, available on GitHub: https://github.com/canari-dev/Excel-macro-to-consume-Canari-API
If you need help, contact us at: contact@canari.dev
User Guide: You can get a preview of the API by typing "data.canari.dev" in your web browser. This will show you a free version of this API with limited data.
Here are examples of possible syntaxes:
For live options prices:
- data.canari.dev/OPT/DAI
- data.canari.dev/OPT/OESX/0923
Add the "csv" suffix to get a CSV rather than HTML formatting, for example: data.canari.dev/OPT/DB1/1223/csv
For historical parameters:
- Implied vol: data.canari.dev/IV/BMW, data.canari.dev/IV/ALV/1224, data.canari.dev/IV/DTE/1224/csv
- Realized vol (intraday, maturity expressed as EWM, span in business days): data.canari.dev/RV/IFX ...
- Implied dividend flow: data.canari.dev/DIV/IBE ...
- Smile (vol spread between ATM strike and 90% strike, normalized to 1Y with factor 1/√T): data.canari.dev/SMI/DTE ...
- Forward: data.canari.dev/FWD/BNP ...
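A small helper, assuming only the path patterns shown above (endpoint codes OPT, IV, RV, DIV, SMI, FWD; an optional maturity segment; an optional csv suffix), might compose these request paths like this:

```python
from typing import Optional

# Base host taken from the examples above; everything else is an
# assumption about how the positional path segments combine.
BASE = "data.canari.dev"

def canari_url(endpoint: str, underlying: str,
               maturity: Optional[str] = None,
               as_csv: bool = False) -> str:
    """Build a request path like data.canari.dev/IV/DTE/1224/csv."""
    parts = [BASE, endpoint, underlying]
    if maturity:
        parts.append(maturity)
    if as_csv:
        parts.append("csv")
    return "/".join(parts)
```

For example, `canari_url("IV", "DTE", "1224", as_csv=True)` reproduces the CSV implied-vol example from the list above.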
List of available underlyings (code: name):
OESX: Eurostoxx50
ODAX: DAX
OSMI: SMI (Swiss index)
OESB: Eurostoxx Banks
OVS2: VSTOXX
ITK: AB Inbev
ABBN: ABB
ASM: ASML
ADS: Adidas
AIR: Air Liquide
EAD: Airbus
ALV: Allianz
AXA: Axa
BAS: BASF
BBVD: BBVA
BMW: BMW
BNP: BNP
BAY: Bayer
DBK: Deutsche Bank
DB1: Deutsche Boerse
DPW: Deutsche Post
DTE: Deutsche Telekom
EOA: E.ON
ENL5: Enel
INN: ING
IBE: Iberdrola
IFX: Infineon
IES5: Intesa Sanpaolo
PPX: Kering
LOR: L'Oreal
MOH: LVMH
LIN: Linde
DAI: Mercedes-Benz
MUV2: Munich Re
NESN: Nestle
NOVN: Novartis
PHI1: Philips
REP: Repsol
ROG: Roche
SAP: SAP
SNW: Sanofi
BSD2: Santander
SND: Schneider
SIE: Siemens
SGE: Société Générale
SREN: Swiss Re
TNE5: Telefonica
TOTB: TotalEnergies
UBSN: UBS
CRI5: Unicredito
SQU: Vinci
VO3: Volkswagen
ANN: Vonovia
ZURN: Zurich Insurance Group
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains historical price data for Bitcoin (BTC/USDT) from January 1, 2018, to the present. The data is sourced from the Binance API, providing granular candlestick data in four timeframes:
- 15-minute (15M)
- 1-hour (1H)
- 4-hour (4H)
- 1-day (1D)
This dataset includes the following fields for each timeframe:
- Open time: The timestamp for when the interval began.
- Open: The price of Bitcoin at the beginning of the interval.
- High: The highest price during the interval.
- Low: The lowest price during the interval.
- Close: The price of Bitcoin at the end of the interval.
- Volume: The trading volume during the interval.
- Close time: The timestamp for when the interval closed.
- Quote asset volume: The total quote asset volume traded during the interval.
- Number of trades: The number of trades executed within the interval.
- Taker buy base asset volume: The volume of the base asset bought by takers.
- Taker buy quote asset volume: The volume of the quote asset spent by takers.
- Ignore: A placeholder column from the Binance API, not used in analysis.
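As a rough illustration of this layout, here is a sketch of labelling one raw kline row as returned by the Binance /api/v3/klines endpoint (the sample values below are invented for illustration):

```python
# Field order follows the field list above. Binance returns prices and
# volumes as strings, so the numeric ones are cast to float here.
FIELDS = [
    "open_time", "open", "high", "low", "close", "volume",
    "close_time", "quote_asset_volume", "number_of_trades",
    "taker_buy_base_volume", "taker_buy_quote_volume", "ignore",
]

NUMERIC = ("open", "high", "low", "close", "volume",
           "quote_asset_volume", "taker_buy_base_volume",
           "taker_buy_quote_volume")

def parse_kline(row):
    """Turn one 12-element kline array into a labelled record."""
    rec = dict(zip(FIELDS, row))
    for key in NUMERIC:
        rec[key] = float(rec[key])
    return rec
```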
Binance API: Used for retrieving 15-minute, 1-hour, 4-hour, and 1-day candlestick data from 2018 to the present.
This dataset is automatically updated every day using a custom Python program.
The source code for the update script is available on GitHub:
🔗 Bitcoin Dataset Kaggle Auto Updater
This dataset is provided under the CC0 Public Domain Dedication. It is free to use for any purpose, with no restrictions on usage or redistribution.
https://optionmetrics.com/contact/
The IvyDB Signed Volume dataset, available as an add-on product for IvyDB US, contains daily data on detailed option trading volume. Trades in the IvyDB US dataset are assigned as either buyer-initiated or seller-initiated based on the trade price and the bid-ask quote at the time of the trade. The total assigned daily volume is aggregated and updated nightly.
Argus is a prominent source of pricing evaluations and business insights extensively utilized in the energy and commodity sectors, specifically for physical supply agreements and the settlement and clearing of financial derivatives. Argus pricing is also employed as a benchmark in swaps markets, for mark-to-market valuations, project financing, taxation, royalties, and risk management. Argus provides comprehensive services globally and continuously develops new assessments to mirror evolving market dynamics and trends. Covered assets encompass Energy, Oil, Refined Products, Power, Gas, Generation fuels, Petrochemicals, Transport, and Metals.
Databento provides upcoming and historical corporate actions impacting over 310,000 global securities, including every company announcement and 61 event types, such as dividends, splits, mergers & acquisitions, listings, and more.
Dividends: Upcoming and past dividends, declaration, ex-dividend, record, and payment dates.
Forward and reverse splits: Capital changes like forward splits and reverse splits with effective dates.
Adjustment factors: Factors to back-adjust end-of-day prices, EPS, P/E, and other figures for all corporate actions.
Mergers and acquisitions: Ticker changes caused by mergers, acquisitions, demergers, spinoffs, and more.
IPOs and new listings: Upcoming and historical listings like initial public offerings (IPOs), with listing dates.
Listing continuity: Listing continuity events like name changes, delistings, and description changes.
Capital changes: Such as share buybacks, redemptions, bonus issues, and rights issues.
Legal actions: Legal issues like bankruptcy and class action lawsuits, with filing and notice dates.
Announcements: Machine-readable announcements from over 400 sources, timestamped to the second.
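To make the adjustment-factor item above concrete, here is a minimal sketch of back-adjusting a price series for a 2-for-1 forward split, assuming a multiplicative factor of 0.5 applied to prices before the effective date. Conventions (multiply vs divide, cumulative vs per-event) vary by vendor, so treat this as an illustration rather than Databento's definition:

```python
def apply_adjustment(prices, factor, effective_index):
    """Scale every price before `effective_index` by `factor`.

    For a 2-for-1 split, factor = 0.5 makes pre-split prices
    comparable with post-split ones."""
    return [p * factor if i < effective_index else p
            for i, p in enumerate(prices)]
```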
Our reference API has the following structure:
- Corporate actions: provides point-in-time (PIT) corporate actions events with global coverage.
- Adjustment factors: provides end-of-day price adjustment factors for capital events, spanning multiple currencies for the same event.
https://fred.stlouisfed.org/legal/#copyright-pre-approval
View data of the S&P 500, an index of the stocks of 500 leading companies in the US economy, which provides a gauge of the U.S. equity market.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overall, this project was meant to test the relationship between social media posts and their short-term effect on stock prices. We used Reddit posts from finance-specific subreddit communities like r/wallstreetbets, r/investing, and r/stocks to see the changes in the market associated with a variety of posts made by users. The idea came from the GameStop short squeeze, which showed the power of social media in the market. In theory, stock prices should purely represent the total present value of all the future value of the company; the question we are asking is whether social media can impact that intrinsic value. Our research question was known from the start: do Reddit posts for or against a certain stock provide insight into how the market will move in a short window? To investigate this, we selected five large tech companies: Apple, Tesla, Amazon, Microsoft, and Google. These companies would likely give us more data in the subreddits and have less day-to-day volatility, making the experiment easier to simulate. They trade at very high values, so any change driven by a Reddit post would have to be significant, giving us evidence of an effect.
Next, we had to choose our data sources. First, we tried to locate the Reddit data using the Reddit API, but because Reddit requires approval to use its data, we switched to a Kaggle dataset containing Reddit metadata. For our second dataset we had planned to use Yahoo Finance through yfinance, but due to the large amount of data we were pulling from this public API, our IP address was temporarily blocked. This caused us to switch our second data source to Alpha Vantage. While this was a large change of provider, it was a minor roadblock, and fixing the finance-pulling section allowed everything else to continue working. Once we had both datasets programmatically pulled into our local VS Code environment, we implemented a pipeline to clean, merge, and analyze all the data. At the end, we implemented a Snakemake workflow to ensure the project was easily reproducible. We then used TextBlob to label each Reddit post with a sentiment of positive, negative, or neutral, matched the time frame of each post with the stock data, computed any price changes, found a correlation coefficient, and graphed our findings.
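The TextBlob labelling step described above reduces to thresholding a polarity score. The sketch below assumes a neutral band of ±0.05, which is an illustrative cutoff rather than necessarily the one the project used:

```python
def label_sentiment(polarity: float, cutoff: float = 0.05) -> str:
    """Map a TextBlob-style polarity score in [-1, 1] to a label.

    The +/- 0.05 neutral band is an assumed cutoff, not necessarily
    the one used in the project."""
    if polarity > cutoff:
        return "positive"
    if polarity < -cutoff:
        return "negative"
    return "neutral"
```

In practice the polarity would come from `TextBlob(post_text).sentiment.polarity`; keeping the thresholding separate makes the labelling rule easy to test on its own.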
To conclude the data analysis, we found relatively small or no correlation across the companies in aggregate, though Microsoft and Google show stronger correlations when analyzed on their own. However, this may be due to other circumstances, such as why a post was made or whether the market already had other trends on those dates. A larger analysis with more data from other social media platforms would be needed to confirm our hypothesis of a strong correlation.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
▲ INSEE modernises its API portal, building on a new architecture ▲
The general conditions of use of the portal, as well as those of the APIs presented, remain unchanged. Under the URL of the new portal, https://portail-api.insee.fr/, you will find INSEE's dissemination APIs.
Access to the SIRENE API:
- To access the SIRENE API, you must first create an account on the new portal and then subscribe to the API. Instructions for use here.
- The same account can subscribe to several APIs, following the same procedure.
Attention: we advise you to integrate this new environment quickly; by the end of the year, the Sirene API will be accessible only from this new portal.
To subscribe to our newsletter Sirene open data news, click here. To consult our newsletters Sirene open data news, click here.
Stock files:
- On May 1st, a new monthly file, stockDoublons, was introduced: the list of siren numbers and their duplicates, with the date of last treatment, always in CSV format.
- The final stock files in 3.11 format were published on 26 March 2024, replacing the previous 3.9 files.
Six compressed monthly stock files (ZIP format) are available:
- the stock file of legal units (active and discontinued legal units in their current state in the directory)
- the stock file of the historical values of the legal units
- the stock file of establishments (active and closed establishments in their current state in the directory)
- the stock file of the historical values of the establishments
- the stock file of the succession links of the establishments
- the stock file of duplicate siren numbers
Each compressed file (ZIP format) contains a data file in CSV format. Files uploaded from the 1st of the month are an image of the Sirene directory as of the last day of the previous month. A stock file for a given month replaces that of the previous month. Discontinued legal units and closed establishments are included, providing access to Sirene data since 1973.
Updates
Infra-monthly updates of these files, including daily updates, are possible:
- using the SIRENE APIs available in the catalogue of INSEE APIs. With the API, you have access to variables indicating, for both establishments and legal units, the date of the last processing carried out: the variables dateLastUniteLegalTreatment and dateLastEstablishmentTreatment. If this date differs from the date of the same record in your stock file, you know that an update has been made. Documentation on Sirene API variables and services is available on the [Documentation] tab (https://porttail-api.insee.fr/catalog/api/2ba0e549-5587-3ef1-9082-99cd865de66f/doc?page=52d26f24-963b-4fc0-926f-24963b4fc021) of each API;
- using "Build a list" on sirene.fr (select the Update Date tab) to download files consisting of daily updates. You can consult the Sirene letter open data news n°2.
As the Siren database contains personal data, INSEE draws your attention to the legal obligations arising therefrom:
- The processing of these data falls under the obligations of the General Data Protection Regulation (GDPR) and of Law 78-17 of 6 January 1978 as amended, known as the CNIL Law.
- Depending on your use of the dataset, it is your responsibility to take into account the most recent distribution status of each natural person, which reflects the objections made by some of them to the consultation or use of their SIRENE data by third parties other than authorized administrations or bodies.
- Legal units or establishments which have a distribution status coded 'P' (respectively statusDiffusionUniteLegale or statusDiffusionEtablissement) are subject to partial dissemination of data following a request for opposition. For an objection by a natural person, the identity of the entrepreneur (surname, first names, etc.), the address in the municipality, and the geolocation will be masked (i.e. not disseminated by the SIRENE API).
In case of opposition by legal representatives of a legal person, the address of the establishment in the municipality and its geolocation will be hidden. Data relating to legal representatives are not disseminated by INSEE as open data, even in the absence of opposition, in accordance with Article R 123-232 of the French Commercial Code.
If you are a company: ATTENTION, for any request to create, modify, or change your administrative situation, please contact the Guichet Unique; no request of this type arriving on this site can be satisfied.
EDI's history of corporate action events dates back to January 2007, with unique Security IDs that track the history of events by issuer over that entire period.
Choose to receive accurate corporate actions data via an SFTP connection either 4x daily or end-of-day, in a proprietary format or the ISO 15022 message standard (MT564 & MT568 announcements).
Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Approximately 20% of Winnipeg Transit buses are equipped with automated passenger counting sensors at each door that count the number of people entering (boarding) and exiting (alighting) the bus along with the relevant bus stop information.
This data is extrapolated into an estimate of the average daily boardings and alightings, aggregated by route, stop, and time of day, for each day type (weekday, Saturday, Sunday). Data is collected over time ranges corresponding to regular seasonal schedules (September-December, December-April, April-June, June-September), and will be uploaded within 30 days after the end of each seasonal schedule.
The time of day field for weekdays is defined as AM Peak (05:00-09:00), Mid-Day (09:00-15:30), PM Peak (15:30-18:30), Evening (18:30-22:30), and Night (22:30-end of service). For Saturdays and Sundays, it is defined as Morning (05:00-11:00), Afternoon (11:00-19:00), Evening (19:00-22:30), and Night (22:30-end of service).
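The weekday bins above can be expressed as a small lookup. Treating times before 05:00 as "Night" is an assumption, since the text only says Night runs from 22:30 to end of service:

```python
from datetime import time

# Weekday time-of-day bins as defined in the description above;
# "Night" is the catch-all for everything outside these ranges.
WEEKDAY_BINS = [
    (time(5, 0), time(9, 0), "AM Peak"),
    (time(9, 0), time(15, 30), "Mid-Day"),
    (time(15, 30), time(18, 30), "PM Peak"),
    (time(18, 30), time(22, 30), "Evening"),
]

def weekday_period(t: time) -> str:
    """Return the weekday time-of-day label for a clock time."""
    for start, end, label in WEEKDAY_BINS:
        if start <= t < end:
            return label
    return "Night"
```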
Due to detection errors and small sample sizes in some cases, boarding numbers may not exactly match alighting numbers. On-request passenger counts are not included in this data set.
More transit data can be found on Winnipeg Transit's Open Data Web Service, located here: https://api.winnipegtransit.com/home/api/v3
http://spdx.org/licenses/CC0-1.0
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Vaisala CL51 ceilometer. Accurately measures ceiling or base height of cloud layers using pulsed diode lidar technology and single lens optics.
The CL51 model is designed for high-range, cirrus cloud height profiling that also includes detailed data on low and middle layer clouds as well as vertical visibility. It has a detection range up to 15 km.
This instrument is installed at the Trollhaugen Observatory at Troll Station in Dronning Maud Land Antarctica.
The instrument is at an elevation of 1560 m above sea level, 300 m above the Troll Airfield.
The images are plots from Vaisala's BLView software, with colours showing backscatter intensity and dots indicating cloud base heights, covering a 24-hour period. See the plot description from Vaisala. Heights in the plots (y-axis) are above the instrument, not relative to sea level or the airfield. Time in the plots (x-axis) is in UTC, ending on the date given at the top of the plot. Plots that start and end at 00:00 carry the date of the day just starting at the end of the plot, so a plot showing data from 00:00 to 00:00 and dated 1 December shows data from 30 November at 00:00 UTC until 1 December at 00:00 UTC.
SEPTA SMS Transit enables users to request scheduled trip information via text message. Users subscribe to the service via text. After setting up an account, users can receive schedule information by texting the Stop ID number for a bus, trolley, or subway stop to 41411. They will receive a return text with information on the next four scheduled trips from that stop. Users can include the specific route designation in the text to receive information on a certain route if the stop serves multiple routes. In addition to the SMS service, there is also a simulator which people can use to experiment at no cost. Finally, the SMS data can be accessed from an API. The data returned by the API is currently in separated text format. The API can be accessed in the format: https://www3.septa.org/sms/var1/var2/var3/var4/var5
- [var1] = stop id
- [var2] = route id OR i/o for inbound/outbound
- [var3] = i/o for inbound/outbound, only if route id is supplied
- [var4] = returns schedule times on or after the specified date, format MM/DD/YYYY; defaults to the current day
- [var5] = returns schedule times on or after the specified time, format HH:mm:ss; defaults to the current time
Stops fall into one of three categories; here is an explanation with some sample links:
Stops served by only one route, where the stop is not the first or last stop and all travel is in a single direction: https://www3.septa.org/sms/321 returns the next 4 scheduled trolleys (all Route 13) at Chester Ave & 49th St.
Stops served by multiple routes, but with all travel in one direction: https://www3.septa.org/sms/20645/ returns the next 4 scheduled trolleys at 22nd St. Station, regardless of route. To get just a single route at a multi-route, uni-directional stop, add another var: https://www3.septa.org/sms/20645/13/ returns only the Route 13 trolleys at 22nd St. Station.
Stops with travel in multiple directions. These are usually end points, like the trolley loop at Juniper, and they may or may not serve multiple routes. For example:
- https://www3.septa.org/sms/283 returns the next 2 inbound and 2 outbound times for all routes
- https://www3.septa.org/sms/283/13/ returns the next 2 inbound and 2 outbound times for only Route 13
- https://www3.septa.org/sms/283/o returns the next 4 outbound times for all routes
- https://www3.septa.org/sms/283/13/o returns the next 4 outbound times for only Route 13
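A tiny helper, assuming only the positional path pattern documented above (stop id, then optional route id, direction, date, and time), could compose these request URLs:

```python
from typing import Optional

def septa_sms_url(stop_id: int,
                  route: Optional[str] = None,
                  direction: Optional[str] = None,
                  date: Optional[str] = None,
                  tm: Optional[str] = None) -> str:
    """Compose a SEPTA SMS API URL from the positional vars above.

    `direction` is "i" or "o"; `date` is MM/DD/YYYY; `tm` is HH:mm:ss.
    Omitted trailing segments are simply left off the path."""
    parts = ["https://www3.septa.org/sms", str(stop_id)]
    for p in (route, direction, date, tm):
        if p is not None:
            parts.append(str(p))
    return "/".join(parts)
```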
Historic data updated on 07/14/2023. Q4 2023 data, and data for all years on systems allowing parking outside of a docking station, updated on 06/04/2024.
Bikeshare ridership by system, year, and month for bikeshare systems with docking stations. Data is available by month starting in January 2019. Months are rearranged to include the same number of days of the week across years (see below). The data is designed to show the impacts of COVID-19 on bikeshare ridership, as featured at https://maps.dot.gov/BTS/dockedbikeshare-COVID/
Ridership data is not available for all docked bikeshare systems; only docked bikeshare systems with ridership data are shown. Some systems included in the data permit users to leave a bicycle outside of a docking station; these trips are indicated by the trip type.
Counting rules:
- Trips are defined as rides from point A to point B. If a user makes a trip from B to A on the same day, it is counted as a second trip.
- Trips labeled as round trips in Metro Bike Share and Indego trip files are counted as 2 trips.
- Trips with no trip time are not counted.
- For trips starting and ending at a docking station, or on systems where only docked trips are permitted, trips with no start station identifier and/or end station identifier are not counted in totals.
- Trips shorter than 1 minute or longer than 2 hours are excluded.
- Days are aligned to include the same days of the week in 2019 and 2020. Days included in each month can be found in the attachment (https://data.bts.gov/api/views/6cfa-ipzd/files/36fde1b8-57c3-4d31-b9dc-bbc896ba346e?download=true&filename=days_included_in_docked_bikeshare_monthly_summaries.xlsx)
- Trips beginning on 12/31/2019 but ending on 01/01/2020 are not included in totals.
Data visualizations are available at: https://data.bts.gov/stories/s/Summary-of-Docked-Bikeshare-Trips-by-System-and-Ot/7fgy-2zkf/
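The duration and round-trip counting rules above can be sketched as a simple filter; the record field names (duration_min, round_trip) are invented for illustration:

```python
def count_trips(trips):
    """Count trips per the stated rules: drop trips with no duration,
    trips under 1 minute or over 2 hours, and count labelled round
    trips as 2. Field names are hypothetical."""
    total = 0
    for t in trips:
        dur = t.get("duration_min")
        if dur is None or dur < 1 or dur > 120:
            continue
        total += 2 if t.get("round_trip") else 1
    return total
```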
Lucror Analytics: Proprietary Hedge Funds Data for Credit Quality & Bond Valuation
At Lucror Analytics, we provide cutting-edge corporate data solutions tailored to fixed income professionals and organizations in the financial sector. Our datasets encompass issuer and issue-level credit quality, bond fair value metrics, and proprietary scores designed to offer nuanced, actionable insights into global bond markets that help you stay ahead of the curve. Covering over 3,300 global issuers and over 80,000 bonds, we empower our clients to make data-driven decisions with confidence and precision.
By leveraging our proprietary C-Score, V-Score, and V-Score I models, which utilize CDS and OAS data, we provide unparalleled granularity in credit analysis and valuation. Whether you are a portfolio manager, credit analyst, or institutional investor, Lucror’s data solutions deliver actionable insights to enhance strategies, identify mispricing opportunities, and assess market trends.
What Makes Lucror’s Hedge Funds Data Unique?
Proprietary Credit and Valuation Models Our proprietary C-Score, V-Score, and V-Score I are designed to provide a deeper understanding of credit quality and bond valuation:
C-Score: A composite score (0-100) reflecting an issuer's credit quality based on market pricing signals such as CDS spreads. Responsive to near-real-time market changes, the C-Score offers granular differentiation within and across credit rating categories, helping investors identify mispricing opportunities.
V-Score: Measures the deviation of an issue’s option-adjusted spread (OAS) from the market fair value, indicating whether a bond is overvalued or undervalued relative to the market.
V-Score I: Similar to the V-Score but benchmarked against industry-specific fair value OAS, offering insights into relative valuation within an industry context.
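As a purely illustrative reading of the V-Score idea (the deviation of an issue's OAS from a fair-value OAS), the sketch below assumes a sign convention in which a wider-than-fair spread means the bond looks cheap; Lucror's actual proprietary model is not published here and this is not it:

```python
def oas_deviation(oas_bps: float, fair_oas_bps: float):
    """Deviation of an issue's OAS from a fair-value OAS, in bps.

    Sign convention (an assumption): positive deviation means the
    bond trades wider (cheaper) than fair value."""
    dev = oas_bps - fair_oas_bps
    view = ("undervalued" if dev > 0
            else "overvalued" if dev < 0
            else "fair")
    return dev, view
```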
Comprehensive Global Coverage Our datasets cover over 3,300 issuers and 80,000 bonds across global markets, ensuring 90%+ overlap with prominent IG and HY benchmark indices. This extensive coverage provides valuable insights into issuers across sectors and geographies, enabling users to analyze issuer and market dynamics comprehensively.
Data Customization and Flexibility We recognize that different users have unique requirements. Lucror Analytics offers tailored datasets delivered in customizable formats, frequencies, and levels of granularity, ensuring that our data integrates seamlessly into your workflows.
High-Frequency, High-Quality Data Our C-Score, V-Score, and V-Score I models and metrics are updated daily using end-of-day (EOD) data from S&P. This ensures that users have access to current and accurate information, empowering timely and informed decision-making.
How Is the Data Sourced? Lucror Analytics employs a rigorous methodology to source, structure, transform and process data, ensuring reliability and actionable insights:
Proprietary Models: Our scores are derived from proprietary quant algorithms based on CDS spreads, OAS, and other issuer and bond data.
Global Data Partnerships: Our collaborations with S&P and other reputable data providers ensure comprehensive and accurate datasets.
Data Cleaning and Structuring: Advanced processes ensure data integrity, transforming raw inputs into actionable insights.
Primary Use Cases
Portfolio Construction & Rebalancing Lucror’s C-Score provides a granular view of issuer credit quality, allowing portfolio managers to evaluate risks and identify mispricing opportunities. With CDS-driven insights and daily updates, clients can incorporate near-real-time issuer/bond movements into their credit assessments.
Portfolio Optimization The V-Score and V-Score I allow portfolio managers to identify undervalued or overvalued bonds, supporting strategies that optimize returns relative to credit risk. By benchmarking valuations against market and industry standards, users can uncover potential mean-reversion opportunities and enhance portfolio performance.
Risk Management With data updated daily, Lucror’s models provide dynamic insights into market risks. Organizations can use this data to monitor shifts in credit quality, assess valuation anomalies, and adjust exposure proactively.
Strategic Decision-Making Our comprehensive datasets enable financial institutions to make informed strategic decisions. Whether it’s assessing the fair value of bonds, analyzing industry-specific credit spreads, or understanding broader market trends, Lucror’s data delivers the depth and accuracy required for success.
Why Choose Lucror Analytics for Hedge Funds Data? Lucror Analytics is committed to providing high-quality, actionable data solutions tailored to the evolving needs of the financial sector. Our unique combination of proprietary models, rigorous sourcing of high-quality data, and customizable delivery ensures that users have the insights they need to make smarter dec...
https://data.go.kr/ugs/selectPortalPolicyView.do
The system marginal price refers to the electricity market price (KRW/kWh) applied to the amount of electricity traded in each trading hour; you can search for system marginal price information by hour, divided into the mainland and Jeju regions.
- Note 1: A trading time of 0:00 in the API indicates the period starting immediately after 0:00 and ending at 01:00.
- Note 2: This API will be deleted in the future; we recommend using the Korea Power Exchange_System Marginal Price and Demand Forecast (for one-day-ahead power generation plan) API instead.
- Updated to OPENAPI User Guide v1.5 on 2024.11.29.
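The hour-labelling convention from Note 1 can be made explicit with a one-liner: trading hour h labels the interval that starts just after h:00 and ends at (h+1):00, wrapping at midnight:

```python
def interval_for_trading_hour(h: int):
    """Return (start, end) clock labels for an API trading hour.

    Per the convention above, trading hour 0 labels the interval
    (00:00, 01:00]; hour 23 wraps to end at 00:00."""
    return (f"{h:02d}:00", f"{(h + 1) % 24:02d}:00")
```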
https://data.go.kr/ugs/selectPortalPolicyView.do
This is open API data providing an information inquiry service for World Youth Volunteer Day. The open data can be used to inquire about participation applications, the volunteer list, volunteer list details, activity plans, activity content, activity evidence, and activity evidence details; for each, you can check the contents, registration date, volunteer date, number of participants, activity start/end dates, activity experience, and so on. It helps to comprehensively manage volunteer performance. The number of participants, activity plans, and result information are also useful for identifying regional characteristics of youth volunteer participation. This data can be used to diagnose the level of youth social participation and the effectiveness of service education policies, and to establish and evaluate youth volunteer policies.
Our Price Paid Data includes information on all property sales in England and Wales that are sold for value and are lodged with us for registration.
Get up to date with the permitted use of our Price Paid Data:
check what to consider when using or publishing our Price Paid Data
If you use or publish our Price Paid Data, you must add the following attribution statement:
Contains HM Land Registry data © Crown copyright and database right 2021. This data is licensed under the Open Government Licence v3.0.
Price Paid Data is released under the Open Government Licence (OGL): http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/. You need to make sure you understand the terms of the OGL before using the data.
Under the OGL, HM Land Registry permits you to use the Price Paid Data for commercial or non-commercial purposes. However, OGL does not cover the use of third party rights, which we are not authorised to license.
Price Paid Data contains address data processed against Ordnance Survey’s AddressBase Premium product, which incorporates Royal Mail’s PAF® database (Address Data). Royal Mail and Ordnance Survey permit your use of Address Data in the Price Paid Data:
If you want to use the Address Data in any other way, you must contact Royal Mail. Email address.management@royalmail.com.
The following fields comprise the address data included in Price Paid Data:
The May 2025 release includes:
As we will be adding to the April data in future releases, we would not recommend using it in isolation as an indication of market or HM Land Registry activity. When the full dataset is viewed alongside the data we’ve previously published, it adds to the overall picture of market activity.
Your use of Price Paid Data is governed by conditions and by downloading the data you are agreeing to those conditions.
Google Chrome (Chrome 88 onwards) is blocking downloads of our Price Paid Data. Please use another internet browser while we resolve this issue. We apologise for any inconvenience caused.
We update the data on the 20th working day of each month. You can download the:
These include standard and additional price paid data transactions received at HM Land Registry from 1 January 1995 to the most current monthly data.
The data is updated monthly and the average size of this file is 3.7 GB, you can download: