License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset records insider transactions (buys, sells, and awards) made between companies and their employees. It contains no public stock exchange data; it was collected through a public API available on the RapidAPI platform.
RapidAPI is a comprehensive platform that functions as the world's largest API hub: a marketplace and management platform for Application Programming Interfaces (APIs). Its primary purpose is to connect developers (API consumers) with a vast array of APIs provided by various developers and companies (API providers).
**Key terms:**
**Stock Options**: A stock option is a type of employee benefit that gives the holder the right to buy company shares at a fixed price, usually below the market price, after a certain vesting period.
**Restricted Stock Units (RSUs)**: Stock grants provided by the company to employees as compensation, which helps keep worker motivation high.
Regarding data quality: the columns' datatypes and representations have been filtered and cleaned. The dataset is small, around 1,300 rows (a toy dataset), which makes it especially useful for beginner-friendly exploratory data analysis. There is no primary key, so a synthetic one must be created; a loading sketch that does this appears below, after the column list.
Columns:
symbol: Ticker symbol of the company's stock.
symbolName: Full name of the company corresponding to the ticker.
fullName: Name of the company insider making the transaction.
shortJobTitle: Position of the insider making the stock transaction.
transactionType: Type of transaction: Buy, Sell, or Award.
amount: Number of shares traded in the transaction.
reportedPrice: Price per share reported for the transaction.
usdValue: Total dollar value of the transaction.
eodHolding: Insider's end-of-day holding after the transaction (number of shares remaining).
transactionDate: Date on which the transaction took place.
symbolCode: Type of security traded (e.g., STK for stock, UIT for unit trust).
hasOptions: Indicates whether the insider has stock options (Yes/No).
symbolType: Numeric code representing the type of instrument or classification (often internal or system-defined).
GitHub link to the source code for data collection through the API: https://github.com/Aryan83699/yahoo-stock-exchange
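To get started, a minimal loading and EDA sketch in Python (the file name and the choice of a simple row-number key are assumptions on our part, not part of the dataset documentation):

```python
# Minimal EDA sketch. Assumes a CSV export named "insider_transactions.csv"
# with the columns listed above; the file name and parsing choices are
# assumptions, not part of the dataset documentation.
import pandas as pd

df = pd.read_csv("insider_transactions.csv", parse_dates=["transactionDate"])

# The dataset ships without a primary key, so create a synthetic one.
df = df.reset_index(drop=True)
df["txn_id"] = df.index + 1

# Beginner-friendly first looks.
print(df.shape)                               # roughly 1,300 rows expected
print(df["transactionType"].value_counts())   # Buy / Sell / Award mix
print(df.groupby("symbol")["usdValue"].sum().nlargest(10))
```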
License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset provides comprehensive stock market data sourced from the Polygon API and Finviz. It includes two main components: aggregated data and ticker-specific sheets.
Tags: #StockMarketData #FinancialData #MinuteWiseData #PolygonAPI #Finviz #StockAnalysis #TradingData #MarketMetrics #StockSymbols #FinancialAnalysis
EDI's corporate action event history dates back to January 2007 and uses unique Security IDs that can track the history of events by issuer over that period.
Choose to receive accurate corporate actions data via an SFTP connection, either four times daily or end-of-day, delivered in a proprietary format or in the ISO 15022 message standard (MT564 and MT568 announcements).
To support global trading schedules, EDI offers seven daily data feeds at 03:30, 07:00, 09:00, 11:00, 13:00, 15:00, and 17:15 GMT, ensuring continuous access to accurate, market-aligned data.
License: https://data.go.kr/ugs/selectPortalPolicyView.do
Provides hourly grid limit price information and demand forecast values for the mainland and Jeju. Each hour is labeled by the end point of its unit period (i.e., trading time 06:00 represents the period starting immediately after 05:00 and ending at 06:00). Data is updated once a day, around 20:00. The existing grid limit price inquiry API will be deleted in the future.
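As a small illustration of the hour-ending convention, a Python sketch (the "HH:MM" label format is an assumption about how the API represents trading time):

```python
# Convert an hour-ending label such as "06:00" into the period it covers,
# per the convention above. A label of "24:00", if the API uses one for the
# final hour of the day, would need special handling before strptime.
from datetime import datetime, timedelta

def period_for_trading_time(label: str, day: str):
    """'06:00' on 'YYYY-MM-DD' -> the period (05:00, 06:00] of that day."""
    end = datetime.strptime(f"{day} {label}", "%Y-%m-%d %H:%M")
    return end - timedelta(hours=1), end

start, end = period_for_trading_time("06:00", "2024-01-15")
print(start, "->", end)  # 2024-01-15 05:00:00 -> 2024-01-15 06:00:00
```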
Argus is a prominent source of pricing evaluations and business insights extensively utilized in the energy and commodity sectors, specifically for physical supply agreements and the settlement and clearing of financial derivatives. Argus pricing is also employed as a benchmark in swaps markets, for mark-to-market valuations, project financing, taxation, royalties, and risk management. Argus provides comprehensive services globally and continuously develops new assessments to mirror evolving market dynamics and trends. Covered assets encompass Energy, Oil, Refined Products, Power, Gas, Generation fuels, Petrochemicals, Transport, and Metals.
Extensive and dependable pricing information spanning the entire range of financial markets. Encompassing worldwide coverage from stock exchanges, trading platforms, indicative contributed prices, assessed valuations, expert third-party sources, and our enhanced data offerings. User-friendly request-response, bulk access, and tailored desktop interfaces to meet nearly any organizational or application data need. Worldwide, real-time, delayed streaming, intraday updates, and meticulously curated end-of-day pricing information.
License: https://fred.stlouisfed.org/legal/#copyright-pre-approval
View data of the S&P 500, an index of the stocks of 500 leading companies in the US economy, which provides a gauge of the U.S. equity market.
🟦 What this is
Synthetic, lineage-verified OHLC bars computed from decoded DEX swaps and pool states. Each row is a time bucket for a specific pool and token direction (token_in → token_out), with open/high/low/close, volumes, and trade counts.
Key traits
• Schema-stable, versioned, audit-ready
• Real-time (WSS) and historical/EOD delivery
• Verifiable lineage to pools, tokens, swaps/logs
🌐 Chains / Coverage
ETH, BSC, Base, Arbitrum, Unichain, Avalanche, Polygon, Celo, Linea, Optimism (others on request). Full history from chain genesis; reorg-aware real-time ingestion and updates.
Coverage includes:
• Uniswap V2, V3, V4
• Balancer V2, PancakeSwap, Solidly, Maverick, Aerodrome, and others
📑 Schema
Columns as delivered (stable names/types):
• id BIGINT - surrogate row id (PK)
• pool_uid BIGINT NOT NULL - FK → liquidity_pools(uid)
Lineage (ids):
• tracing_id BYTEA NOT NULL - row identity (proof-of-derivation)
• parent_tracing_ids BYTEA NOT NULL - immediate sources (packed hashes)
• genesis_tracing_ids BYTEA NOT NULL - ultimate on-chain sources (packed hashes)
Lineage (chain position, window anchors):
• first_genesis_block_number BIGINT NOT NULL - first event in bucket
• first_genesis_tx_index INTEGER NOT NULL
• first_genesis_log_index INTEGER NOT NULL
• last_genesis_block_number BIGINT NOT NULL - last event in bucket
• last_genesis_tx_index INTEGER NOT NULL
• last_genesis_log_index INTEGER NOT NULL
Bucket definition:
• bucket_start TIMESTAMPTZ NOT NULL - inclusive bucket start (UTC)
• bucket_seconds INTEGER NOT NULL - one of {60, 300, 900, 1800, 3600, 14400, 86400} for 1m, 5m, 15m, 30m, 1h, 4h, 1d
Pair & mid snapshot:
• token_in BYTEA NOT NULL - 20B (FK → erc20_tokens)
• token_out BYTEA NOT NULL - 20B (FK → erc20_tokens)
OHLC (prices are decimals-adjusted; token_out per 1 token_in):
• open NUMERIC(78,18) NOT NULL
• high NUMERIC(78,18) NOT NULL
• low NUMERIC(78,18) NOT NULL
• close NUMERIC(78,18) NOT NULL
Volumes (token units are decimals-adjusted):
• volume_in NUMERIC(78,18) NOT NULL - sum of amount_in within bucket
• volume_out NUMERIC(78,18) NOT NULL - sum of amount_out within bucket
• trades_count BIGINT NOT NULL - swap count in bucket
Notes
• Prices are decimals-adjusted (token_out per 1 token_in).
• Volumes are decimals-adjusted.
• Direction is implied by token_in → token_out. For the reverse direction, a separate row exists with the tokens swapped.
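To make the decimals adjustment concrete, a small illustrative sketch (the helper function and the example amounts are ours, not part of the product):

```python
# How a decimals-adjusted price (token_out per 1 token_in) relates to raw
# ERC-20 swap amounts, which are integers scaled by each token's decimals.
from decimal import Decimal

def adjusted_price(raw_in: int, dec_in: int, raw_out: int, dec_out: int) -> Decimal:
    amount_in = Decimal(raw_in) / Decimal(10) ** dec_in
    amount_out = Decimal(raw_out) / Decimal(10) ** dec_out
    return amount_out / amount_in  # token_out per 1 token_in

# Hypothetical swap: 1.5 WETH (18 decimals) for 4,500 USDC (6 decimals).
print(adjusted_price(1_500_000_000_000_000_000, 18, 4_500_000_000, 6))  # 3000
```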
🔑 Keys & Joins
• Primary key: id
• Idempotency: (pool_uid, token_in, token_out, bucket_start, bucket_seconds)
• Foreign keys:
  • pool_uid → liquidity_pools(uid)
  • token_in/token_out → erc20_tokens(contract_address)
  • first_genesis_* and last_genesis_* triples → logs(block_number, tx_index, log_index)
🔗 Joins to Dependency Products
• Liquidity Pools Catalog (liquidity_pools) - pool metadata (fee tier, type, tokens).
• ERC-20 Tokens Catalog (erc20_tokens) - symbol, decimals, names.
• Swaps / Logs - provenance checks and drill-downs.
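One hedged way to work with delivered extracts in pandas, joining bars to the token catalog and enforcing the idempotency key (file names are placeholders; column names follow the schema above):

```python
# Join OHLC bars to token metadata and de-duplicate on the idempotency key.
# Assumes local Parquet extracts of the delivered tables; file names are
# placeholders for illustration only.
import pandas as pd

bars = pd.read_parquet("token_to_token_prices_ohlc.parquet")
tokens = pd.read_parquet("erc20_tokens.parquet")  # contract_address, symbol, decimals

idem_key = ["pool_uid", "token_in", "token_out", "bucket_start", "bucket_seconds"]
bars = bars.drop_duplicates(subset=idem_key, keep="last")  # keep latest emission

# Addresses must share one encoding (e.g. raw bytes) on both sides of the join.
bars = (
    bars.merge(tokens.add_prefix("in_"), how="left",
               left_on="token_in", right_on="in_contract_address")
        .merge(tokens.add_prefix("out_"), how="left",
               left_on="token_out", right_on="out_contract_address")
)
print(bars[["in_symbol", "out_symbol", "bucket_start", "close"]].head())
```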
🧬 Lineage & Reproducibility
Every bar's lineage is cryptographically linked to its inputs:
• tracing_id - deterministic identity of this OHLC row
• parent_tracing_ids - contributing swaps/states used in the bucket
• genesis_tracing_ids - ultimate raw on-chain sources
Anchors to the first and last events in the bucket enable exact replay and audit.
📈 Common uses
• Charting & analytics (1m → 1d); volatility and signal engineering
• Backtesting and factor research with stable, reproducible bars
• Routing heuristics and execution scheduling by time of day
• Monitoring: liquidity/price regime shifts at multiple horizons
🚚 Delivery
By default:
• WebSocket (WSS): reorg-aware live emissions when a new update is available; <140 ms median latency on ETH streams (7-day).
• SFTP server for archives and daily End-of-Day (EOD) snapshots.
• Model Context Protocol (MCP) for AI workflows (pull slices, schemas, lineage).
Optional:
• Integrations to Amazon S3, Azure Blob Storage, Snowflake, and other enterprise platforms on request.
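As a rough sketch of consuming the live feed with the Python `websockets` package (the URL, authentication, and message fields below are placeholders; the actual connection details are not published in this description):

```python
# Minimal reorg-aware consumer sketch. Everything endpoint-specific here
# (URL, auth, payload shape) is a placeholder to be replaced from the
# vendor's documentation.
import asyncio
import json
import websockets

async def consume(url: str = "wss://example.invalid/ohlc"):  # placeholder URL
    async with websockets.connect(url) as ws:
        async for raw in ws:
            bar = json.loads(raw)
            # A reorg-aware feed may re-emit a bucket, so upsert on the
            # idempotency key instead of appending blindly.
            key = (bar.get("pool_uid"), bar.get("token_in"), bar.get("token_out"),
                   bar.get("bucket_start"), bar.get("bucket_seconds"))
            print(key, bar.get("close"))

asyncio.run(consume())
```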
🗂️ Files (time-partitioned in UTC, compressed)
• Parquet
• CSV
• XLS
• JSON
💡 Quality and operations
• Reorg-aware ingestion.
• 99.95% uptime target SLA.
• Backfills to chain genesis.
• Versioned, schema-stable datasets; changes are additive and announced.
🔄 Change policy
Schema is stable. Any breaking change ships as a new version (e.g., token_to_token_prices_ohlc_v2) with migration notes. Content updates are additive; types aren't changed in place.
Approximately 20% of Winnipeg Transit buses are equipped with automated passenger counting sensors at each door that count the number of people entering (boarding) and exiting (alighting) the bus along with the relevant bus stop information.
This data is extrapolated into an estimate of the average daily boardings and alightings, aggregated by route, stop, and time of day, for each day type (weekday, Saturday, Sunday). Data is collected over time ranges corresponding to regular seasonal schedules (September-December, December-April, April-June, June-September), and will be uploaded within 30 days after the end of each seasonal schedule.
The time of day field for weekdays is defined as AM Peak (05:00-09:00), Mid-Day (09:00-15:30), PM Peak (15:30-18:30), Evening (18:30-22:30), and Night (22:30-end of service). For Saturdays and Sundays, it is defined as Morning (05:00-11:00), Afternoon (11:00-19:00), Evening (19:00-22:30), and Night (22:30-end of service).
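A small Python sketch of the classification rule above (the function signature and day-type labels are ours, not part of the dataset):

```python
# Classify a service time into the time-of-day periods defined above.
# Hours past midnight (end-of-service trips) fall into "Night".
def time_period(hour: float, day_type: str) -> str:
    bands = {
        "weekday": [(5, "AM Peak"), (9, "Mid-Day"), (15.5, "PM Peak"),
                    (18.5, "Evening"), (22.5, "Night")],
        "weekend": [(5, "Morning"), (11, "Afternoon"), (19, "Evening"),
                    (22.5, "Night")],
    }
    label = "Night"  # before 05:00 counts as end-of-service night
    for start, name in bands["weekday" if day_type == "weekday" else "weekend"]:
        if hour >= start:
            label = name
    return label

print(time_period(8.0, "weekday"))    # AM Peak
print(time_period(12.0, "saturday"))  # Afternoon
```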
Due to detection errors and small sample sizes in some cases, boarding numbers may not exactly match alighting numbers. On-request passenger counts are not included in this data set.
More transit data can be found on Winnipeg Transit's Open Data Web Service, located here: https://api.winnipegtransit.com/home/api/v3
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
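A hedged Python sketch of such a query; the DDN HTTP endpoint and the qualified table name are assumptions, so verify both against the dataset page and Splitgraph's documentation before relying on them:

```python
# POST a SQL query to Splitgraph's public query endpoint. Both the endpoint
# URL and the "winnipeg/transit".passenger_counts table name are assumptions
# for illustration only.
import requests

resp = requests.post(
    "https://data.splitgraph.com/sql/query/ddn",  # assumed endpoint
    json={"sql": 'SELECT * FROM "winnipeg/transit".passenger_counts LIMIT 10'},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```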
See the Splitgraph documentation for more information.
License: https://fred.stlouisfed.org/legal/#copyright-citation-required
Graph and download economic data for CBOE Volatility Index: VIX (VIXCLS) from 1990-01-02 to 2025-12-01 about VIX, volatility, stock market, and USA.
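The series can also be pulled programmatically; a sketch using pandas-datareader (our library choice; the VIXCLS series ID comes from the description above, and FRED's own API works as well with an API key):

```python
# Fetch the CBOE Volatility Index (VIXCLS) from FRED.
from datetime import date
from pandas_datareader import data as web

vix = web.DataReader("VIXCLS", "fred", start=date(1990, 1, 2), end=date.today())
print(vix.tail())
print("All-time max:", float(vix["VIXCLS"].max()))
```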
Note: Date last updated is 2022-03-17; the dataset is no longer provided.
From https://www.alberta.ca/covid-19-alberta-data.aspx; updated 2023-08-29 15:15 with data as of end of day 2023-07-24.
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications, as in the query example shown above for the transit dataset.
See the Splitgraph documentation for more information.
The datasets are split by census block, cities, counties, districts, provinces, and states. The typical dataset includes the fields below.
Columns (number, data attribute, description):
1. device_id - hashed anonymized unique id per moving device
2. origin_geoid - geohash id of the origin grid cell
3. destination_geoid - geohash id of the destination grid cell
4. origin_lat - origin latitude with 4-to-5 decimal precision
5. origin_long - origin longitude with 4-to-5 decimal precision
6. destination_lat - destination latitude with 5-to-6 decimal precision
7. destination_lon - destination longitude with 5-to-6 decimal precision
8. start_timestamp - start timestamp / local time
9. end_timestamp - end timestamp / local time
10. origin_shape_zone - customer-provided origin shape id, zone or census block id
11. destination_shape_zone - customer-provided destination shape id, zone or census block id
12. trip_distance - inferred distance traveled in meters, as the crow flies
13. trip_duration - inferred duration of the trip in seconds
14. trip_speed - inferred speed of the trip in meters per second
15. hour_of_day - hour of day of trip start (0-23)
16. time_period - time period of trip start (morning, afternoon, evening, night)
17. day_of_week - day of week of trip start (mon, tue, wed, thu, fri, sat, sun)
18. year - year of trip start
19. iso_week - ISO week of the trip
20. iso_week_start_date - start date of the ISO week
21. iso_week_end_date - end date of the ISO week
22. travel_mode - mode of travel (walking, driving, bicycling, etc.)
23. trip_event - trip or segment events (start, route, end, start-end)
24. trip_id - trip identifier (unique for each batch of results)
25. origin_city_block_id - census block id for the trip origin point
26. destination_city_block_id - census block id for the trip destination point
27. origin_city_block_name - census block name for the trip origin point
28. destination_city_block_name - census block name for the trip destination point
29. trip_scaled_ratio - ratio used to scale up each trip; for example, a trip_scaled_ratio value of 10 means that 1 original trip was scaled up to 10 trips
30. route_geojson - GeoJSON line representing the trip route trajectory or geometry
The datasets can be processed and enhanced to also include places, POI visitation patterns, hour-of-day patterns, weekday patterns, weekend patterns, dwell time inferences, and macro movement trends.
The dataset is delivered as gzipped CSV archive files that are uploaded to your AWS S3 bucket upon request.
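A short loading sketch showing how trip_scaled_ratio is meant to be used as a weight (the file name is a placeholder; column names follow the field list above):

```python
# Load one delivered gzipped CSV and aggregate with trip_scaled_ratio as a
# weight: each row represents that many real-world trips, so sum the ratio
# rather than counting rows.
import pandas as pd

trips = pd.read_csv("trips_batch_001.csv.gz",
                    parse_dates=["start_timestamp", "end_timestamp"])

by_mode = trips.groupby("travel_mode")["trip_scaled_ratio"].sum()
print(by_mode.sort_values(ascending=False))
```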