100+ datasets found
  1. Dataset for Stock Market Index of 7 Economies

    • kaggle.com
    zip
    Updated Jul 4, 2023
    Cite
    Saad Aziz (2023). Dataset for Stock Market Index of 7 Economies [Dataset]. https://www.kaggle.com/datasets/saadaziz1985/dataset-for-stock-market-index-of-7-countries
    Explore at:
    Available download formats: zip (1917326 bytes)
    Dataset updated
    Jul 4, 2023
    Authors
    Saad Aziz
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context:

    The dataset was extracted from Yahoo Finance using pandas and the Yahoo Finance library (yfinance) in Python. It covers stock market indices of the world's leading economies. The code generated data from Jan 01, 2003 to Jun 30, 2023, i.e. more than 20 years. There are 18 CSV files: the dataset covers 16 different stock market indices across 7 countries, while the remaining two CSV files contain the annualized return and the compound annual growth rate (CAGR) computed from the extracted data. Below is the list of countries along with the number of indices extracted through the Yahoo Finance library.
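    As a hedged sketch of the extraction output described above: the live download would use the yfinance package (e.g. `yf.download("^BSESN", start="2003-01-01", end="2023-06-30")`, where the ticker symbol is an assumption for illustration). The snippet below builds a synthetic frame with the same column layout as the per-index CSVs, i.e. the OHLCV columns plus the derived Year/Month/Day columns from the variable list.

```python
# Hypothetical sketch; ticker symbols and values are illustrations, not
# the published data. The real download step would be, via yfinance:
#   import yfinance as yf
#   df = yf.download("^BSESN", start="2003-01-01", end="2023-06-30")
import pandas as pd

dates = pd.date_range("2003-01-01", "2003-01-10", freq="B")  # business days
df = pd.DataFrame({
    "Open": 100.0, "High": 101.0, "Low": 99.0,
    "Close": 100.5, "Adj Close": 100.5, "Volume": 1_000_000,
}, index=dates)

# Derive the calendar columns the dataset carries per row.
df["Year"] = df.index.year
df["Month"] = df.index.month
df["Day"] = df.index.day

print(df.columns.tolist())
```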

    Number of Countries & Index:

    Image (Number of Index.JPG): https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F15657145%2F90ce8a986761636e3edbb49464b304d8%2FNumber%20of%20Index.JPG?generation=1688490342207096&alt=media

    Content:

    Unit of analysis: Stock Market Index Analysis

    This dataset is useful for research purposes, particularly for conducting comparative analyses involving capital market performance and could be used along with other economic indicators.

    There are 18 distinct CSV files associated with this dataset. The first 16 CSV files cover the individual indices, and the last two contain the annualized return for each year and the CAGR of each index. If any column is blank, that index was launched in a later year. For instance, the BSE500 (India) was launched in 2007, so its earlier values are blank; similarly, the China_Top300 index was launched in 2021, so its early fields are blank too.

    The extraction process applies different criteria: in the 16 index CSV files all columns are included, and Adj Close is used to calculate the annualized return. The algorithm extracts data based on the index name (the code given by Yahoo Finance) and the start and end dates.

    The annualized return and CAGR have been calculated and are illustrated in the images below, along with the machine-readable (CSV) files attached to the dataset.

    To extract the data provided in the attachment, various criteria were applied:

    1. Content Filtering: The data was filtered on several attributes, including the index name and the start and end dates. This ensured that only data meeting the specified criteria was included.

    2. Collaborative Filtering: Another technique used was collaborative filtering via Yahoo Finance, which relies on index similarity. This involves finding indices similar to a given index, or extending the dataset's scope to other countries or economies. By leveraging this method, the algorithm identifies and extracts data based on similarities between indices.

    Of the last two CSV files, one contains the annualized return, which was calculated from the Adj Close column and stored in a new DataFrame. Below is an image of the annualized returns of all indices (if unreadable, a machine-readable CSV is attached to the dataset).
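    The yearly-return and CAGR computations described above can be sketched as follows. This is a minimal illustration on synthetic Adj Close data; the grouping logic is an assumption, since the source does not show its code.

```python
import pandas as pd

# Synthetic Adj Close series over two calendar years, for illustration only.
dates = pd.date_range("2021-01-01", "2022-12-31", freq="B")
prices = pd.Series(
    [100 * (1.0005 ** i) for i in range(len(dates))], index=dates, name="Adj Close"
)

# Annualized (yearly) return: last close of the year vs. first close of the year.
yearly = prices.groupby(prices.index.year).agg(["first", "last"])
yearly["Yearly_Return"] = yearly["last"] / yearly["first"] - 1

# CAGR over the whole span: (end / start) ** (1 / years) - 1.
years = (dates[-1] - dates[0]).days / 365.25
cagr = (prices.iloc[-1] / prices.iloc[0]) ** (1 / years) - 1
print(yearly["Yearly_Return"].round(4).to_dict(), round(cagr, 4))
```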

    Annualized Return:

    In terms of annualized rate of return, India's stock market indices lead most of the time, followed by the USA, Canada and Japan stock market indices.

    Image (Annualized Return.JPG): https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F15657145%2F37645bd90623ea79f3708a958013c098%2FAnnualized%20Return.JPG?generation=1688525901452892&alt=media

    Compound Annual Growth Rate (CAGR):

    The best performing index by compound growth is the Sensex (India), which comprises the top 30 companies, at 15.60%, followed by the Nifty500 (India) at 11.34% and the Nasdaq (USA) at 10.60%.

    The worst performing index is the China_Top300; however, it was launched in 2021 (post-pandemic), so it cannot be meaningfully assessed at this stage (due to limited data availability). Furthermore, UK and Russia indices are also among the five worst performers.

    Image (CAGR.JPG): https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F15657145%2F58ae33f60a8800749f802b46ec1e07e7%2FCAGR.JPG?generation=1688490409606631&alt=media

    Geography: Stock Market Index of the World Top Economies

    Time period: Jan 01, 2003 – June 30, 2023

    Variables: Stock Market Index Title, Open, High, Low, Close, Adj Close, Volume, Year, Month, Day, Yearly_Return and CAGR

    File Type: CSV file

    Inspiration:

    • Time series prediction model
    • Investment opportunities in world best economies
    • Comparative Analysis of past data with other stock market indices or other indices

    Disclaimer:

    This is not financial advice; due diligence is required for each investment decision.

  2. Data from: U-Index, a dataset and an impact metric for informatics tools and databases

    • search.dataone.org
    • datasetcatalog.nlm.nih.gov
    • +2more
    Updated Apr 11, 2025
    Cite
    Alison Callahan; Rainer Winnenburg; Nigam H. Shah (2025). U-Index, a dataset and an impact metric for informatics tools and databases [Dataset]. http://doi.org/10.5061/dryad.gj651
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Alison Callahan; Rainer Winnenburg; Nigam H. Shah
    Time period covered
    Feb 22, 2019
    Description

    Measuring the usage of informatics resources such as software tools and databases is essential to quantifying their impact, value and return on investment. We have developed a publicly available dataset of informatics resource publications and their citation network, along with an associated metric (u-Index) to measure informatics resources’ impact over time. Our dataset differentiates the context in which citations occur to distinguish between ‘awareness’ and ‘usage’, and uses a citing universe of open access publications to derive citation counts for quantifying impact. Resources with a high ratio of usage citations to awareness citations are likely to be widely used by others and have a high u-Index score. We have pre-calculated the u-Index for nearly 100,000 informatics resources. We demonstrate how the u-Index can be used to track informatics resource impact over time. The method of calculating the u-Index metric, the pre-computed u-Index values, and the dataset we compiled to calc...

  3. Report on Evaluation of the Interaction-based Hazard Index Formula with Data on Trihalomethanes

    • catalog.data.gov
    • s.cnmilf.com
    Updated Aug 3, 2024
    + more versions
    Cite
    U.S. EPA Office of Research and Development (ORD) (2024). Report on Evaluation of the Interaction-based Hazard Index Formula with Data on Trihalomethanes [Dataset]. https://catalog.data.gov/dataset/report-on-evaluation-of-the-interaction-based-hazard-index-formula-with-data-on-trihalomet
    Explore at:
    Dataset updated
    Aug 3, 2024
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    The endpoints selected for evaluation of the HIINT formula were percent relative liver weight of mice (PcLiv) and the logarithm of ALT [Log(ALT)], where the log transformation was used to help stabilize the increases in variance with dose found in the ALT dataset.

  4. Stock Market Dataset(NIFTY 50)

    • kaggle.com
    zip
    Updated Oct 22, 2024
    Cite
    Bhadra Mohit (2024). Stock Market Dataset(NIFTY 50) [Dataset]. https://www.kaggle.com/datasets/bhadramohit/stock-market-datasetnifty-50
    Explore at:
    Available download formats: zip (3409 bytes)
    Dataset updated
    Oct 22, 2024
    Authors
    Bhadra Mohit
    License

    https://cdla.io/sharing-1-0/

    Description

    Context

    This dataset provides comprehensive historical data for the Nifty 50 Index, including daily open, high, low and close prices, and trade volumes. Spanning the period 2024-2025, it captures market trends across India's leading stock index during a time of significant economic shifts, including the global pandemic and post-recovery phases.

    The NIFTY 50 is a benchmark Indian stock market index that represents the weighted average of 50 of the largest Indian companies listed on the National Stock Exchange. It is one of the two main stock indices used in India, the other being the BSE SENSEX.

    Nifty 50 is owned and managed by NSE Indices (previously known as India Index Services & Products Limited), which is a wholly-owned subsidiary of the NSE Strategic Investment Corporation Limited. NSE Indices had a marketing and licensing agreement with Standard & Poor's for co-branding equity indices until 2013. The Nifty 50 index was launched on 22 April 1996 and is one of the many stock indices of Nifty.

    Data can be useful for trend analysis, volatility studies, and investment strategy development for both long-term and short-term market assessments.

    The NIFTY 50 index is a free-float market capitalization weighted index. The index was initially calculated on a full market capitalization methodology. On 26 June 2009, the computation was changed to a free-float methodology. The base period for the NIFTY 50 index is 3 November 1995, which marked the completion of one year of operations of the National Stock Exchange Equity Market Segment. The base value of the index has been set at 1000 and a base capital of ₹ 2.06 trillion.
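    The free-float weighting described above can be sketched as a simple ratio against the base-period market capitalization, scaled to the stated base value of 1000. The constituent figures below are made-up illustrations, not actual NIFTY 50 data.

```python
# Minimal sketch of a free-float market-capitalization weighted index.
def index_value(constituents, base_market_cap, base_value=1000.0):
    """constituents: list of (price, shares_outstanding, free_float_factor)."""
    free_float_cap = sum(p * s * f for p, s, f in constituents)
    return free_float_cap / base_market_cap * base_value

# Hypothetical base-period market cap (the stated Rs 2.06 trillion base
# capital) and two invented constituents.
base_cap = 2.06e12
stocks = [(1500.0, 6.0e8, 0.5), (800.0, 1.2e9, 0.6)]
print(round(index_value(stocks, base_cap), 2))
```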

  5. Human Development Index (HDI)

    • data.gov.tw
    csv
    Cite
    Directorate General of Budget, Accounting and Statistics, Executive Yuan, R.O.C., Human Development Index (HDI) [Dataset]. https://data.gov.tw/en/datasets/25711
    Explore at:
    Available download formats: csv
    Dataset authored and provided by
    Directorate General of Budget, Accounting and Statistics, Executive Yuan, R.O.C.
    License

    https://data.gov.tw/license

    Description

    (1) The Human Development Index (HDI) is compiled by the United Nations Development Programme (UNDP) to measure a country's comprehensive development in the areas of health, education, and economy according to the UNDP's calculation formula.

    (2) Explanation:
      (a) The HDI value ranges from 0 to 1, with higher values being better.
      (b) Because our country is not a member of the United Nations and its international situation is special, the index is calculated by our department according to the UNDP formula using our country's data. The calculation of the comprehensive index for each year is based mainly on the indicator data adopted by the UNDP.
      (c) In order to keep the same baseline for international comparison, the comprehensive index and rankings are not retroactively adjusted after publication.

    (3) Notes:
      (a) The old indicators included life expectancy at birth, adult literacy rate, gross enrollment ratio, and average annual income per person calculated by purchasing power parity.
      (b) The indicators were updated to life expectancy at birth, mean years of schooling, expected years of schooling, and nominal gross national income (GNI) calculated by purchasing power parity. Starting in 2011, GNI per capita was adjusted from nominal to real value to exclude the impact of price changes. Additionally, the HDI calculation method changed from arithmetic mean to geometric mean.
      (c) The calculation method for indicators in the education domain changed from geometric mean to simple average, due to retrospective adjustments in the 2014 Human Development Report for the years 2005, 2008, and 2010-2012. Since 2016, the education domain has adopted data compiled by the Ministry of Education according to definitions from the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD).
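    As noted above, the HDI now combines its dimension indices with a geometric mean. A minimal stdlib sketch (the three index values are made-up illustrations):

```python
# HDI aggregation: geometric mean of the health, education and income
# dimension indices, each already normalized to the range [0, 1].
def hdi(health, education, income):
    return (health * education * income) ** (1 / 3)

print(round(hdi(0.9, 0.8, 0.7), 4))
```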

  6. 3.27 Traffic Delay Reduction (summary)

    • catalog.data.gov
    • data.tempe.gov
    • +8more
    Updated Sep 7, 2025
    + more versions
    Cite
    City of Tempe (2025). 3.27 Traffic Delay Reduction (summary) [Dataset]. https://catalog.data.gov/dataset/3-27-traffic-delay-reduction-summary-3d3ad
    Explore at:
    Dataset updated
    Sep 7, 2025
    Dataset provided by
    City of Tempe
    Description

    The city is using the Travel Time Index as a measure to quantify traffic delay in the city. The Travel Time Index is the ratio of the travel time during the peak period to the time required to make the same trip at free-flow speeds. It should be noted that this data is subject to seasonal variations. The 2020 Q2 and Q3 data includes the summer months, when traffic volumes are lower, so the Travel Time Index is improved in these quarters. The performance measure page is available at 3.27 Traffic Delay Reduction.

    Additional Information
    Source: Bluetooth ARID sensors
    Contact (author): Cathy Hollow
    Contact E-Mail (author): catherine_hollow@tempe.gov
    Contact (maintainer):
    Contact E-Mail (maintainer):
    Data Source Type: Table, CSV
    Preparation Method: Peak period data is manually extracted. The travel time index calculation is the peak period data divided by the free flow data (constant per segment).
    Publish Frequency: Quarterly
    Publish Method: Manual
    Data Dictionary
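    The ratio described above can be sketched directly; the minute values are made-up illustrations, not Tempe measurements.

```python
# Travel Time Index: peak-period travel time divided by the free-flow
# travel time for the same trip (1.0 means no delay).
def travel_time_index(peak_minutes, free_flow_minutes):
    return peak_minutes / free_flow_minutes

print(travel_time_index(12.0, 10.0))  # prints 1.2
```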

  7. Producer price indices for services; index 2006 = 100; 2002Q4 - 2011Q4

    • data.europa.eu
    • data.overheid.nl
    • +2more
    atom feed, json
    Updated Oct 30, 2021
    + more versions
    Cite
    (2021). Producer price indices for services; index 2006 = 100; 2002Q4 - 2011Q4 [Dataset]. https://data.europa.eu/data/datasets/4582-producer-price-indices-for-services-index-2006-100-2002q4-2011q4?locale=hu
    Explore at:
    Available download formats: atom feed, json
    Dataset updated
    Oct 30, 2021
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table shows the price indices and the quarterly and yearly price changes of services provided by companies. There is a breakdown by type of service according to the Coordinated European goods and services classification (CPA). The prices of services are observed in the sectors for which supplying the service is the main activity.

    Included in the producer price indices are: Section I, transport, storage and communication services; Section K, real estate, renting and business services

    Not included in producer price indices are: Section G, wholesale and retail trade, repair of motor vehicles and motorcycles; Section H, hotels and restaurants; Section J, financial services.

    The index reference year of all producer price indices is 2006. The year average, the quarterly and the yearly changes are calculated with unrounded figures.

    Data available from: 2002 4th quarter. Frequency: quarterly.

    Status of the figures: the figures for the most recent period are final.

    When will new figures be published: This table was stopped on 30-6-2012 and is continued as the table 'Price indices services; index 2010 = 100'.

    Changes compared with previous versions: From the third quarter of 2010 onwards, a new method is used to calculate Total renting services of automobiles, which falls under the aggregate Renting services of machinery and equipment without operator and of personal and household goods. This method corresponds to the current calculation method of the services price index.

  8. Mapping Uncertainty Due to Missing Data in the Global Ocean Health Index

    • plos.figshare.com
    tiff
    Updated Jun 2, 2023
    Cite
    Melanie Frazier; Catherine Longo; Benjamin S. Halpern (2023). Mapping Uncertainty Due to Missing Data in the Global Ocean Health Index [Dataset]. http://doi.org/10.1371/journal.pone.0160377
    Explore at:
    Available download formats: tiff
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Melanie Frazier; Catherine Longo; Benjamin S. Halpern
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Indicators are increasingly used to measure environmental systems; however, they are often criticized for failing to measure and describe uncertainty. Uncertainty is particularly difficult to evaluate and communicate in the case of composite indicators which aggregate many indicators of ecosystem condition. One of the ongoing goals of the Ocean Health Index (OHI) has been to improve our approach to dealing with missing data, which is a major source of uncertainty. Here we: (1) quantify the potential influence of gapfilled data on index scores from the 2015 global OHI assessment; (2) develop effective methods of tracking, quantifying, and communicating this information; and (3) provide general guidance for implementing gapfilling procedures for existing and emerging indicators, including regional OHI assessments. For the overall OHI global index score, the percent contribution of gapfilled data was relatively small (18.5%); however, it varied substantially among regions and goals. In general, smaller territorial jurisdictions and the food provision and tourism and recreation goals required the most gapfilling. We found the best approach for managing gapfilled data was to mirror the general framework used to organize, calculate, and communicate the Index data and scores. Quantifying gapfilling provides a measure of the reliability of the scores for different regions and components of an indicator. Importantly, this information highlights the importance of the underlying datasets used to calculate composite indicators and can inform and incentivize future data collection.

  9. Data from: Quality of life indices: how robust are the results considering different aggregation techniques?

    • tandf.figshare.com
    pdf
    Updated Dec 15, 2023
    Cite
    Karel Macků; Radek Barvíř (2023). Quality of life indices: how robust are the results considering different aggregation techniques? [Dataset]. http://doi.org/10.6084/m9.figshare.21222379.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    Dec 15, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Karel Macků; Radek Barvíř
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The quality of life has been an attractive topic for several decades, and it has received attention in the scientific, political and public spheres. However, in a growing number of studies aimed at assessing the quality of life, inconsistencies persist in the definition, theoretical underpinnings and in approaches to assessing the quality of life. This study aims to compare the results of different methods of aggregating quality of life indicators into a synthetic index. The synthesis of individual sub-indices results in a final quality of life index and a typology which describes the variability arising from using different index calculation methods. The individual approaches to the calculations confirm the partial robustness of the results which, at the same time, can be an inspiration for a range of tasks where the parallel use of different methods reveals interesting internal relationships in the analysed data.

  10. Data from: A New Bayesian Approach to Increase Measurement Accuracy Using a Precision Entropy Indicator

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 25, 2025
    Cite
    Domjan, Peter; Angyal, Viola; Bertalan, Adam; Vingender, Istvan; Dinya, Elek (2025). A New Bayesian Approach to Increase Measurement Accuracy Using a Precision Entropy Indicator [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14417120
    Explore at:
    Dataset updated
    Feb 25, 2025
    Dataset provided by
    Semmelweis University
    Authors
    Domjan, Peter; Angyal, Viola; Bertalan, Adam; Vingender, Istvan; Dinya, Elek
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    "We believe that by accounting for the inherent uncertainty in the system during each measurement, the relationship between cause and effect can be assessed more accurately, potentially reducing the duration of research."

    Short description

    This dataset was created as part of a research project investigating the efficiency and learning mechanisms of a Bayesian adaptive search algorithm supported by the Imprecision Entropy Indicator (IEI) as a novel method. It includes detailed statistical results, posterior probability values, and the weighted averages of IEI across multiple simulations aimed at target localization within a defined spatial environment. Control experiments, including random search, random walk, and genetic algorithm-based approaches, were also performed to benchmark the system's performance and validate its reliability.

    The task involved locating a target area centered at (100; 100) within a radius of 10 units (Research_area.png), inside a circular search space with a radius of 100 units. The search process continued until 1,000 successful target hits were achieved.

    To benchmark the algorithm's performance and validate its reliability, control experiments were conducted using alternative search strategies, including random search, random walk, and genetic algorithm-based approaches. These control datasets serve as baselines, enabling comprehensive comparisons of efficiency, randomness, and convergence behavior across search methods, thereby demonstrating the effectiveness of our novel approach.

    Uploaded files

    The first dataset contains the average IEI values, generated by randomly simulating 300 x 1 hits for 10 bins per quadrant (4 quadrants in total) using the Python programming language, and calculating the corresponding IEI values. This resulted in a total of 4 x 10 x 300 x 1 = 12,000 data points. The summary of the IEI values by quadrant and bin is provided in the file results_1_300.csv. The calculation of IEI values for averages is based on likelihood, using an absolute difference-based approach for the likelihood probability computation. IEI_Likelihood_Based_Data.zip

    The weighted IEI average values for likelihood calculation (Bayes formula) are provided in the file Weighted_IEI_Average_08_01_2025.xlsx

    This dataset contains the results of a simulated target search experiment using Bayesian posterior updates and Imprecision Entropy Indicators (IEI). Each row represents a hit during the search process, including metrics such as Shannon entropy (H), Gini index (G), average distance, angular deviation, and calculated IEI values. The dataset also includes bin-specific posterior probability updates and likelihood calculations for each iteration. The simulation explores adaptive learning and posterior penalization strategies to optimize the search efficiency. Our Bayesian adaptive searching system source code (search algorithm, 1,000 target searches): IEI_Self_Learning_08_01_2025.py

    This dataset contains the results of 1,000 iterations of a successful target search simulation. The simulation runs until the target is successfully located in each iteration. The dataset includes three further main outputs: a) Results files (results{iteration_number}.csv): details of each hit during the search process, including entropy measures, Gini index, average distance and angle, Imprecision Entropy Indicators (IEI), coordinates, and the bin number of the hit. b) Posterior updates (Pbin_all_steps_{iter_number}.csv): tracks the posterior probability updates for all bins during the search process across multiple steps. c) Likelihood analysis (likelihood_analysis_{iteration_number}.csv): contains the calculated likelihood values for each bin at every step, based on the difference between the measured IEI and pre-defined IEI bin averages. IEI_Self_Learning_08_01_2025.py

    Based on the mentioned Python source code (see point 3, the Bayesian adaptive searching method with IEI values), we performed 1,000 successful target searches, and the outputs were saved in the Self_learning_model_test_output.zip file.

    Bayesian Search (IEI) from different quadrant. This dataset contains the results of Bayesian adaptive target search simulations, including various outputs that represent the performance and analysis of the search algorithm. The dataset includes: a) Heatmaps (Heatmap_I_Quadrant, Heatmap_II_Quadrant, Heatmap_III_Quadrant, Heatmap_IV_Quadrant): These heatmaps represent the search results and the paths taken from each quadrant during the simulations. They indicate how frequently the system selected each bin during the search process. b) Posterior Distributions (All_posteriors, Probability_distribution_posteriors_values, CDF_posteriors_values): Generated based on posterior values, these files track the posterior probability updates, including cumulative distribution functions (CDF) and probability distributions. c) Macro Summary (summary_csv_macro): This file aggregates metrics and key statistics from the simulation. It summarizes the results from the individual results.csv files. d) Heatmap Searching Method Documentation (Bayesian_Heatmap_Searching_Method_05_12_2024): This document visualizes the search algorithm's path, showing how frequently each bin was selected during the 1,000 successful target searches. e) One-Way ANOVA Analysis (Anova_analyze_dataset, One_way_Anova_analysis_results): This includes the database and SPSS calculations used to examine whether the starting quadrant influences the number of search steps required. The analysis was conducted at a 5% significance level, followed by a Games-Howell post hoc test [43] to identify which target-surrounding quadrants differed significantly in terms of the number of search steps. Results were saved in the Self_learning_model_test_results.zip

    This dataset contains randomly generated sequences of bin selections (1-40) from a control search algorithm (random search) used to benchmark the performance of Bayesian-based methods. The process iteratively generates random numbers until a stopping condition is met (reaching target bins 1, 11, 21, or 31). This dataset serves as a baseline for analyzing the efficiency, randomness, and convergence of non-adaptive search strategies. The dataset includes the following: a) The Python source code of the random search algorithm. b) A file (summary_random_search.csv) containing the results of 1000 successful target hits. c) A heatmap visualizing the frequency of search steps for each bin, providing insight into the distribution of steps across the bins. Random_search.zip
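    The random-search control described above can be sketched as follows (40 bins, stopping when bin 1, 11, 21 or 31 is reached). The seed is an assumption for reproducibility and is not part of the published code.

```python
import random

TARGET_BINS = {1, 11, 21, 31}

def random_search(rng, n_bins=40):
    """Draw uniform bins until a target bin is hit; return the full path."""
    path = []
    while True:
        b = rng.randint(1, n_bins)  # inclusive on both ends
        path.append(b)
        if b in TARGET_BINS:
            return path

rng = random.Random(42)
steps = [len(random_search(rng)) for _ in range(1000)]
# With 4 target bins out of 40, the hit probability per draw is 0.1,
# so the expected number of steps per successful search is about 10.
print(sum(steps) / len(steps))
```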

    This dataset contains the results of a random walk search algorithm, designed as a control mechanism to benchmark adaptive search strategies (Bayesian-based methods). The random walk operates within a defined space of 40 bins, where each bin has a set of neighboring bins. The search begins from a randomly chosen starting bin and proceeds iteratively, moving to a randomly selected neighboring bin, until one of the stopping conditions is met (bins 1, 11, 21, or 31). The dataset provides detailed records of 1,000 random walk iterations, with the following key components: a) Individual Iteration Results: Each iteration's search path is saved in a separate CSV file (random_walk_results_.csv), listing the sequence of steps taken and the corresponding bin at each step. b) Summary File: A combined summary of all iterations is available in random_walk_results_summary.csv, which aggregates the step-by-step data for all 1,000 random walks. c) Heatmap Visualization: A heatmap file is included to illustrate the frequency distribution of steps across bins, highlighting the relative visit frequencies of each bin during the random walks. d) Python Source Code: The Python script used to generate the random walk dataset is provided, allowing reproducibility and customization for further experiments. Random_walk.zip

    This dataset contains the results of a genetic search algorithm implemented as a control method to benchmark adaptive Bayesian-based search strategies. The algorithm operates in a 40-bin search space with predefined target bins (1, 11, 21, 31) and evolves solutions through random initialization, selection, crossover, and mutation over 1000 successful runs. Dataset Components: a) Run Results: Individual run data is stored in separate files (genetic_algorithm_run_.csv), detailing: Generation: the generation number. Fitness: the fitness score of the solution. Steps: the path length in bins. Solution: the sequence of bins visited. b) Summary File: summary.csv consolidates the best solutions from all runs, including their fitness scores, path lengths, and sequences. c) All Steps File: summary_all_steps.csv records all bins visited during the runs for distribution analysis. d) A heatmap was also generated for the genetic search algorithm, illustrating the frequency of bins chosen during the search process as a representation of the search pathways. Genetic_search_algorithm.zip

    Technical Information

    The dataset files have been compressed into a standard ZIP archive using Total Commander (version 9.50). The ZIP format ensures compatibility across various operating systems and tools.

    The XLSX files were created using Microsoft Excel Standard 2019 (Version 1808, Build 10416.20027)

    The Python program was developed using Visual Studio Code (Version 1.96.2, user setup), with the following environment details: Commit fabd6a6b30b49f79a7aba0f2ad9df9b399473380f, built on 2024-12-19. The Electron version is 32.6, and the runtime environment includes Chromium 128.0.6263.186, Node.js 20.18.1, and V8 12.8.374.38-electron.0. The operating system is Windows NT x64 10.0.19045.

    The statistical analysis included in this dataset was partially conducted using IBM SPSS Statistics, Version 29.0.1.0

    The CSV files in this dataset were created following European standards, using a semicolon (;) as the delimiter instead of a comma, encoded in UTF-8 to ensure compatibility with a wide

  11. Historic Gridded Standardised Precipitation Index for the United Kingdom 1862-2015 (generated using gamma distribution with standard period 1961-2010) v4

    • catalogue.ceh.ac.uk
    • hosted-metadata.bgs.ac.uk
    • +3more
    text/directory
    Updated Oct 11, 2017
    + more versions
    Cite
    M. Tanguy; M. Fry; C. Svensson; J. Hannaford (2017). Historic Gridded Standardised Precipitation Index for the United Kingdom 1862-2015 (generated using gamma distribution with standard period 1961-2010) v4 [Dataset]. http://doi.org/10.5285/233090b2-1d14-4eb9-9f9c-3923ea2350ff
    Explore at:
    text/directoryAvailable download formats
    Dataset updated
    Oct 11, 2017
    Dataset provided by
    NERC EDS Environmental Information Data Centre
    Authors
    M. Tanguy; M. Fry; C. Svensson; J. Hannaford
    License

    https://eidc.ceh.ac.uk/licences/historic-SPI/plainhttps://eidc.ceh.ac.uk/licences/historic-SPI/plain

    Time period covered
    Jan 1, 1862 - Dec 31, 2015
    Area covered
    Description

    5km gridded Standardised Precipitation Index (SPI) data for Great Britain, which is a drought index based on the probability of precipitation for a given accumulation period as defined by McKee et al [1]. There are seven accumulation periods: 1, 3, 6, 9, 12, 18, 24 months and for each period SPI is calculated for each of the twelve calendar months. Note that values in monthly (and for longer accumulation periods also annual) time series of the data therefore are likely to be autocorrelated. The standard period which was used to fit the gamma distribution is 1961-2010. The dataset covers the period from 1862 to 2015. This version supersedes previous versions (version 2 and 3) of the same dataset due to minor errors in the data files. NOTE: the difference between this dataset with the previously published dataset 'Gridded Standardized Precipitation Index (SPI) using gamma distribution with standard period 1961-2010 for Great Britain [SPIgamma61-10]' (Tanguy et al., 2015; https://doi.org/10.5285/94c9eaa3-a178-4de4-8905-dbfab03b69a0) , apart from the temporal and spatial extent, is the underlying rainfall data from which SPI was calculated. In the previously published dataset, CEH-GEAR (Tanguy et al., 2014; https://doi.org/10.5285/5dc179dc-f692-49ba-9326-a6893a503f6e) was used, whereas in this new version, Met Office 5km rainfall grids were used (see supporting information for more details). The methodology to calculate SPI is the same in the two datasets. [1] McKee, T. B., Doesken, N. J., Kleist, J. (1993). The Relationship of Drought Frequency and Duration to Time Scales. Eighth Conference on Applied Climatology, 17-22 January 1993, Anaheim, California.
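A minimal sketch of the SPI computation described above, using scipy: fit a gamma distribution to a baseline accumulation series, map observations through its CDF, and convert them to standard-normal deviates. The baseline here is synthetic and purely illustrative, and this sketch omits the mixed-distribution handling of zero-precipitation months used in operational SPI calculations.

```python
import numpy as np
from scipy import stats

def spi(precip, baseline):
    """SPI sketch: gamma fit on the baseline, then probability transform
    to a standard normal deviate."""
    # Location pinned at 0, as is conventional for precipitation amounts.
    shape, loc, scale = stats.gamma.fit(baseline, floc=0)
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

# Synthetic stand-in for a 1961-2010 baseline of accumulated precipitation (mm).
rng = np.random.default_rng(42)
baseline = rng.gamma(shape=2.0, scale=30.0, size=600)

values = np.array([10.0, 60.0, 200.0])   # dry, near-average, wet accumulations
spi_values = spi(values, baseline)
```

Negative SPI values indicate drier-than-normal conditions, positive values wetter-than-normal, relative to the fitted baseline.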

  12. Services producer price index (SPPI); index 2021=100

    • data.overheid.nl
    • cbs.nl
    atom, json
    Updated Nov 14, 2025
    + more versions
    Cite
    Centraal Bureau voor de Statistiek (Rijk) (2025). Services producer price index (SPPI); index 2021=100 [Dataset]. https://data.overheid.nl/dataset/46509-services-producer-price-index--sppi---index-2021-100
    Explore at:
    atom(KB), json(KB)Available download formats
    Dataset updated
    Nov 14, 2025
    Dataset provided by
    Centraal Bureau voor de Statistiek
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table shows the price indices, quarterly and yearly changes in prices of services that companies provide. The figures are broken down by type of services according to the Classification of Products by Activity (CPA 2015 version 2.1). For some services, a further breakdown has been made on the basis of market data that differ from the CPA. This breakdown is indicated with a letter after the CPA-code.

    The base year for all Services producer price indices is 2021. The year average, quarterly and yearly changes are calculated with unrounded figures.

    Data available from: 4th quarter 2002.

    Status of the figures: The figures for the most recent quarter are provisional. These figures are made definite in the publication for the subsequent quarter.

    Changes as of November 14 2025: The provisional figures of the 3rd quarter 2025 are published for approximately half of the branches. All previous figures are made definite. For all other branches the figures of the 3rd quarter 2025 are available at a later date.

    When will new figures be published? New figures are available twice per quarter. Halfway each quarter, the results of the pricing method Model pricing (around half of the branches) are published and the other branches with the Unit value method follow at the end of the quarter. This concerns the price development of the previous quarter. The Services producer price index of the total commercial services is also calculated and published at the end of each quarter.

    The Services producer price indices publication schedule can be downloaded as an Excel file under section: 3 Relevant articles. More information about the pricing method can be found in the video under section: 3 Relevant articles.

  13. ALGO TRADING DATA - Nifty 500 intraday data (2025)

    • kaggle.com
    zip
    Updated Aug 6, 2025
    Cite
    Deba (2025). ALGO TRADING DATA - Nifty 500 intraday data (2025) [Dataset]. https://www.kaggle.com/datasets/debashis74017/algo-trading-data-nifty-100-data-with-indicators
    Explore at:
    zip(3870923437 bytes)Available download formats
    Dataset updated
    Aug 6, 2025
    Authors
    Deba
    License

    https://creativecommons.org/publicdomain/zero/1.0/https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Last Update - 9th FEB 2025

    Disclaimer!!! The data uploaded here were collected from the internet and some Google Drive sources. The sole purpose of uploading these data is to provide the Kaggle community with a good source of data for analysis and research. I don't own these datasets and am not legally responsible for them by any means. I am not charging anything (money or any favor) for this dataset. RESEARCH PURPOSE ONLY.

    THIS IS THE LARGEST DATASET ON NIFTY 100 STOCKS WITH 1-MINUTE AND DAILY DATA (2015 to 2025)

    The NIFTY 50 is a benchmark Indian stock market index that represents the weighted average of 50 of the largest Indian companies listed on the National Stock Exchange. It is one of the two main stock indices used in India, the other being the BSE SENSEX.

    Nifty 50 is owned and managed by NSE Indices (previously known as India Index Services & Products Limited), which is a wholly-owned subsidiary of the NSE Strategic Investment Corporation Limited. NSE Indices had a marketing and licensing agreement with Standard & Poor's for co-branding equity indices until 2013. The Nifty 50 index was launched on 22 April 1996, and is one of the many stock indices of Nifty.

    The NIFTY 50 index is a free-float market capitalization-weighted index. The index was initially calculated on a full market capitalization methodology. On 26 June 2009, the computation was changed to a free-float methodology. The base period for the NIFTY 50 index is 3 November 1995, which marked the completion of one year of operations of the National Stock Exchange Equity Market Segment. The base value of the index has been set at 1000 and a base capital of ₹ 2.06 trillion.

    Content

    This dataset contains Nifty 100 historical daily prices. The historical data are retrieved from the NSE India website. Each stock in this dataset belongs to the Nifty 500 and is provided as 1-minute intraday data.

    Every dataset contains the following fields:
    • Open - Open price of the stock
    • High - High price of the stock
    • Low - Low price of the stock
    • Close - Close price of the stock
    • Volume - Volume traded of the stock in this time frame
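With these fields, the 1-minute bars can be aggregated into daily bars using pandas `resample`. The prices below are synthetic placeholders; the dataset supplies the real bars.

```python
import numpy as np
import pandas as pd

# Synthetic 1-minute OHLCV bars for one NSE trading session (09:15-15:29).
idx = pd.date_range("2025-01-06 09:15", periods=375, freq="min")
rng = np.random.default_rng(0)
close = 100 + rng.standard_normal(len(idx)).cumsum()
df = pd.DataFrame({
    "Open": close, "High": close + 0.5, "Low": close - 0.5,
    "Close": close, "Volume": rng.integers(100, 1000, len(idx)),
}, index=idx)

# Aggregate the 1-minute bars into a single daily bar.
daily = df.resample("D").agg({
    "Open": "first", "High": "max", "Low": "min",
    "Close": "last", "Volume": "sum",
}).dropna()
```

The per-field aggregation (`first`/`max`/`min`/`last`/`sum`) is the standard OHLCV downsampling rule.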

    Inspiration

    • Data is uploaded for Research and Educational purposes.
    • The data scientists and researchers can download and can build EDA, find Correlations, and perform Regression analysis on it.
    • Quant researchers can build strategies and backtest their strategies with this dataset.

    Stock Names

    | ACC | ADANIENT | ADANIGREEN | ADANIPORTS | AMBUJACEM |
    | -- | -- | -- | -- | -- |
    | APOLLOHOSP | ASIANPAINT | AUROPHARMA | AXISBANK | BAJAJ-AUTO |
    | BAJAJFINSV | BAJAJHLDNG | BAJFINANCE | BANDHANBNK | BANKBARODA |
    | BERGEPAINT | BHARTIARTL | BIOCON | BOSCHLTD | BPCL |
    | BRITANNIA | CADILAHC | CHOLAFIN | CIPLA | COALINDIA |
    | COLPAL | DABUR | DIVISLAB | DLF | DMART |
    | DRREDDY | EICHERMOT | GAIL | GLAND | GODREJCP |
    | GRASIM | HAVELLS | HCLTECH | HDFC | HDFCAMC |
    | HDFCBANK | HDFCLIFE | HEROMOTOCO | HINDALCO | HINDPETRO |
    | HINDUNILVR | ICICIBANK | ICICIGI | ICICIPRULI | IGL |
    | INDIGO | INDUSINDBK | INDUSTOWER | INFY | IOC |
    | ITC | JINDALSTEL | JSWSTEEL | JUBLFOOD | KOTAKBANK |
    | LICI | LT | LTI | LUPIN | M&M |
    | MARICO | MARUTI | MCDOWELL-N | MUTHOOTFIN | NAUKRI |
    | NESTLEIND | NIFTY 50 | NIFTY BANK | NMDC | NTPC |
    | ONGC | PEL | PGHH | PIDILITIND | PIIND |
    | PNB | POWERGRID | RELIANCE | SAIL | SBICARD |
    | SBILIFE | SBIN | SHREECEM | SIEMENS | SUNPHARMA |
    | TATACONSUM | TATAMOTORS | TATASTEEL | TCS | TECHM |
    | TITAN | TORNTPHARM | ULTRACEMCO | UPL | VEDL |
    | WIPRO | YESBANK | | | |

  14. Sample Survey on Price Statistics (Producer Price Index and Agriculture...

    • catalog.ihsn.org
    Updated Mar 29, 2019
    Cite
    National Statistical Service (2019). Sample Survey on Price Statistics (Producer Price Index and Agriculture Price Index) 2007 - Armenia [Dataset]. https://catalog.ihsn.org/catalog/288
    Explore at:
    Dataset updated
    Mar 29, 2019
    Dataset authored and provided by
    National Statistical Service
    Time period covered
    2007
    Area covered
    Armenia
    Description

    Abstract

    Transition to free economic structure and, as a consequence, processes of privatization of large agricultural and industrial organizations and birth of numerous new economic entities led to significant changes in quantitative and qualitative characteristics of industrial organizations and peasant farms in RA. During the last decade and especially the last 4-5 years, the structural changes, in their turn, caused also certain complications in the mentioned fields in terms of ensuring collection, comprehensiveness and reliability of statistical data on prices and pricing.

    In particular, in case of radical structural changes, international recommendations require the weights upon which price indexes are based to be periodically updated. In order to have a real picture and dynamics of the present situation on creation of indicators for new base year, i.e. collection of information on set of goods-representatives, their weights, average annual prices, prices and price changes, it would be necessary to periodically conduct sample surveys for further improvement of the methodology for price index calculation.

    The objectives of the survey were: • to improve the sample, develop a new sample, • to revise the base year and weights, • to receive additional information on prices of sales of industrial, agricultural product and purchase (acquisition of production means) in RA, • to improve methodology for price observation and calculation of price indexes (survey technology, price and other necessary data collection, processing, analyzing), • to revise the base year for producer price indexes, components structure, shares, calculation mechanism, etc., • to derive price indexes that would be in line with the international definitions, standards and classifications, • to complement the NSS RA price indexes database and create preconditions for its regular updating, • to update the information on economic units covered by price indexes calculation, • to ensure use of international standards and classifications in statistics, • to form preconditions for extension of sample observation mechanisms in the state statistics.

    Besides the above mentioned, the need of the given survey was also stipulated by the following reasons: - a great mobility of micro-sized, small and medium-sized organizations mainly caused by increased speed of their births, activity and produced commodity changes or deaths that decreases the opportunity to create long-term fixed-base time series of prices and price indexes, - According to the CPA classification coding and recoding activities related to the introduction of Armenian classification of economic activities - NACE (based on the European Communities’ NACE classification).

    Geographic coverage

    National

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    SAMPLE DESIGN

    Agriculture: The survey sample was designed in the absence of a farm register. The number of peasant farms was calculated and derived by database analysis. The number of villages (quotas) selected from each marz was determined taking into account the percentage of rural population in each marz. The villages within a marz were selected randomly. The peasant farms covered by the survey were selected based on the number of privatized plots. The survey was carried out in 200 rural communities selected from 10 marzes, with 5-20 households from each community. The pilot survey was conducted with 1,901 farms in the sample.

    Industry: The sample frame for the survey was designed as follows: 1. Industrial organizations with a share of 5 percent or more were selected by the reduction method from the fifth level (each subsection) of NACE for the whole of RA industry. 476 out of 2,231 industrial organizations covered by statistical observation were selected for the pilot survey.

    2. 70 organizations suggested by the Industry Statistics Division of NSS RA and 70 organizations included in state price observations previously conducted by the NSS RA (140 organizations in all), which are considered important and representative for price observation and were excluded from the above-mentioned sample, were separated from the general population. These organizations were also included in the sample population of the pilot survey. As became obvious from further work, the sample covered both large and medium-sized and small and micro-sized organizations, which ensured the representativeness of separate branches of industry and of organizations by size. As a result, given the objective of the survey as well as the available financial constraints, the sample population of the pilot survey comprised 616 industrial organizations, whose volumes of production, according to data for January-October 2006, comprised more than 86% of the total volume of RA industrial production. 165 (92.7%) out of 178 classes of NACE were covered by the sample.

    Mode of data collection

    Face-to-face [f2f]

  15. Rate of return and risk of German stock investments and annuity bonds 1870...

    • da-ra.de
    Updated 2009
    + more versions
    Cite
    Markus Marowietz (2009). Rate of return and risk of german stock investments and annuity bonds 1870 to 1992 [Dataset]. http://doi.org/10.4232/1.8384
    Explore at:
    Dataset updated
    2009
    Dataset provided by
    da|ra
    GESIS Data Archive
    Authors
    Markus Marowietz
    Time period covered
    1870 - 1992
    Description

    Sources:

    German Central Bank (ed.), 1975: Deutsches Geld- und Bankwesen in Zahlen 1876 – 1975 (German monetary system and banking system in numbers, 1876 – 1975).
    German Central Bank (ed.), various years: Monthly reports of the German Central Bank, statistical part, interest rates.
    German Central Bank (ed.), various years: Supplementary statistical booklets for the monthly reports of the German Central Bank 1959 – 1992, security statistics.
    Reich Statistical Office (ed.), various years: Statistical yearbook of the German empire.
    Statistical Office (ed.), 1985: Geld und Kredit. Index der Aktienkurse (Money and credit. Index of share prices) – Lange Reihe; Fachserie 9, Reihe 2.
    Statistical Office (ed.), 1987: Entwicklung der Nahrungsmittelpreise von 1800 – 1880 in Deutschland (Development of food prices in Germany, 1800 – 1880).
    Statistical Office (ed.), 1987: Entwicklung der Verbraucherpreise seit 1881 in Deutschland (Development of consumer prices since 1881 in Germany).
    Statistical Office (ed.), various years: Fachserie 17, Reihe 7, Preisindex für die Lebenshaltung (price index for the cost of living).
    Donner, 1934: Kursbildung am Aktienmarkt; Grundlagen zur Konjunkturbeobachtung an den Effektenmärkten (Prices on the stock market; groundwork for observation of economic cycles on the stock markets).
    Homburger, 1905: Die Entwicklung des Zinsfusses in Deutschland von 1870 – 1903 (Development of the interest rate in Germany, 1870 – 1903).
    Voye, 1902: Über die Höhe der verschiedenen Zinsarten und ihre wechselseitige Abhängigkeit (On the levels of different types of interest and their interdependence).

  16. Leaf area index (LAI) dataset of Tibetan Plateau (1982-2015)

    • data.tpdc.ac.cn
    • tpdc.ac.cn
    • +1more
    zip
    Updated Jan 3, 2021
    Cite
    Chi CHEN (2021). Leaf area index (LAI) dataset of Tibetan Plateau (1982-2015) [Dataset]. http://doi.org/10.11888/Ecolo.tpdc.271036
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jan 3, 2021
    Dataset provided by
    TPDC
    Authors
    Chi CHEN
    Area covered
    Description

    The dataset is based on LAI3g, calculated from the GIMMS AVHRR sensor, which represents the greenness of vegetation. The data are from Chen et al. (2019), and the specific calculation method is described in that article. The source data are global in extent; the Tibetan Plateau region was extracted for this dataset. The original semi-monthly data were aggregated into monthly data by taking the maximum of the two LAI values in each month, so as to remove noise as far as possible. This is one of the most widely used LAI datasets, often used to evaluate the temporal and spatial patterns of vegetation greenness, which gives it both practical significance and theoretical value.

  17. Drought and Moisture Surplus for the Conterminous United States, Annual Data...

    • catalog.data.gov
    • colorado-river-portal.usgs.gov
    • +11more
    Updated Nov 14, 2025
    + more versions
    Cite
    U.S. Forest Service (2025). Drought and Moisture Surplus for the Conterminous United States, Annual Data 1-Year Windows (Image Service) [Dataset]. https://catalog.data.gov/dataset/drought-and-moisture-surplus-for-the-conterminous-united-states-annual-data-1-year-windows-6243b
    Explore at:
    Dataset updated
    Nov 14, 2025
    Dataset provided by
    U.S. Department of Agriculture Forest Servicehttp://fs.fed.us/
    Area covered
    Contiguous United States, United States
    Description

    The Moisture Deficit and Surplus map uses moisture difference z-score (MDZ) datasets developed by scientists Frank Koch, John Coulston, and William Smith of the Forest Service Southern Research Station to represent drought and moisture surplus across the contiguous United States. A z-score is a statistical method for assessing how different a value is from the mean. Mean moisture values over 1-year windows were derived from monthly historical precipitation and temperature data from PRISM, between 1900 and 2023, and compared against a 1900-2017 baseline. The greater the z-value, the larger the departure from average conditions, indicating larger moisture deficits (droughts) or surpluses. Thus, the dark orange areas on the map indicate a 1-year window with extreme drought, relative to the average conditions over the past century. Detailed technical methods for this analysis are available here: https://www.fs.usda.gov/treesearch/pubs/43361. The map is derived from monthly PRISM temperature and precipitation data, located here: ftp://prism.nacse.org/monthly/. Monthly temperature data are used to calculate potential evapotranspiration (PET) using the Thornthwaite PET equation. Monthly precipitation and PET data are then used to calculate a moisture index (MI) for each month within a 1-year time window. The mean moisture index (MMI) across the months of the target window is compared to an appropriate long-term normal, in this case the average of the MMI for all windows between 1900 and 2017. Then, a moisture difference z-score (MDZ) is calculated from the MMI for the window of interest. This is done by subtracting the 1900-2017 normal MMI from the MMI for a given year, and then dividing by the standard deviation over the baseline period. Equations for calculating the modified moisture index are adopted from Willmott, C.J. and Feddema, J.J. 1992. A more rational climatic moisture index. Professional Geographer 44(1): 84-87. The z-score values were then reclassified using the classification scheme below:
    z-score less than -2 -- extremely dry compared to normal conditions
    z-score -2 to -1.5 -- severely dry compared to normal conditions
    z-score -1.5 to -1 -- moderately dry compared to normal conditions
    z-score -1 to -0.5 -- mildly dry compared to normal conditions
    z-score -0.5 to 0.5 -- near normal conditions
    z-score 0.5 to 1 -- mildly wet compared to normal conditions
    z-score 1 to 1.5 -- moderately wet compared to normal conditions
    z-score 1.5 to 2 -- severely wet compared to normal conditions
    z-score more than 2 -- extremely wet compared to normal conditions
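The MDZ computation and the reclassification described above can be sketched as follows. The inputs here are synthetic; the published maps use the PRISM-derived MMI series.

```python
import numpy as np

def mdz(mmi, baseline_mmi):
    # z-score of a window's mean moisture index against the baseline:
    # subtract the baseline mean, divide by the baseline standard deviation.
    return (mmi - np.mean(baseline_mmi)) / np.std(baseline_mmi)

def classify(z):
    # Reclassification scheme from the dataset description.
    bins = [(-2, "extremely dry"), (-1.5, "severely dry"), (-1, "moderately dry"),
            (-0.5, "mildly dry"), (0.5, "near normal"), (1, "mildly wet"),
            (1.5, "moderately wet"), (2, "severely wet")]
    for upper, label in bins:
        if z < upper:
            return label
    return "extremely wet"
```

For example, a window whose MMI sits two baseline standard deviations below the 1900-2017 mean yields z = -2 and falls in the "severely dry" class; anything below that is "extremely dry".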

  18. Capital stock; national accounts 1995-2016

    • data.overheid.nl
    • cbs.nl
    • +1more
    atom, json
    Updated Jul 13, 2017
    + more versions
    Cite
    Centraal Bureau voor de Statistiek (Rijk) (2017). Capital stock; national accounts 1995-2016 [Dataset]. https://data.overheid.nl/dataset/4791-capital-stock--national-accounts-1995-2016
    Explore at:
    json(KB), atom(KB)Available download formats
    Dataset updated
    Jul 13, 2017
    Dataset provided by
    Statistics Netherlands
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table contains figures on capital stock. The capital stock of different branches and sectors is presented here. The capital stock is broken down by type of capital good.

    Figures of the sectors households and non-profit institutions serving households (NPISH) are from reporting year 2013 onwards no longer separately published. Only their aggregate will be published. The reason for this change is that reliable estimates for NPISH for recent years are no longer available.

    Data available from: 1995 Status of the figures: The figures for the most recent reporting year 2016 are provisional. The status of the figures for 2015 is final.

    Changes as of February 2018: None; this table has been replaced by an updated version. See paragraph 3.

    Changes as from 13 July 2017: Provisional figures on the reporting year 2016 have been added. Using old data has led to incorrect figures for investments of the households and NPISHs for the reporting years 2001-2010. Adjusting for these errors results in different figures on investments and the statistical discrepancies. Smaller differences due to rounding occur for the capital stock opening and closing balance sheet, the depreciation and the revaluation. Data on the years 1995-2000 have been added.

    Changes as from 25 October 2016: A number of corrections have been applied as a result of mistakes in the calculations for the years 2002, 2003, 2004, 2009 and 2015. These mistakes did not result in any changes in the totals for the closing balance sheet, but led to incorrect aggregations of sectors/branches or type of capital good.

    Furthermore, the calculation method of the volume indices has been harmonised for the capital stock and the non-financial balance sheets. Moreover, the volume index is now calculated on the basis of rounded figures. Because of these changes in method, a maximum difference of 109.5 percentage points occurs for series of less than 100 mln. A maximum difference of 2.1 percentage points occurs for series larger than 100 mln. Volume indices of series which contain 0 mln of capital stock every year are set at 100, rather than hidden.

    When will new figures be published? Provisional data are published 6 months after the end of the reporting year. Final data are released 18 months after the end of the reporting year. Since the end of June 2016 the release and revision policy of the national accounts have been changed. References to additional information about these changes can be found in section 3.

  19. Asia Outbound: Daily Baltic Spot Index Air Freight & Cargo Data - Expert...

    • datarade.ai
    .csv, .pdf
    Updated Oct 3, 2025
    Cite
    TAC Index (2025). Asia Outbound: Daily Baltic Spot Index Air Freight & Cargo Data - Expert Panel Tier 1 Airlines and Forwarders [Dataset]. https://datarade.ai/data-products/asia-outbound-daily-baltic-spot-index-air-freight-cargo-da-tac-index
    Explore at:
    .csv, .pdfAvailable download formats
    Dataset updated
    Oct 3, 2025
    Dataset authored and provided by
    TAC Index
    Area covered
    Luxembourg, United Arab Emirates, Brunei Darussalam, Germany, Oman, Thailand, Philippines, Turkey, United Kingdom, Slovenia
    Description

    BAI Daily Spot Air Cargo & Freight Data Indices used for CONTRACT NEGOTIATION, SETTLEMENT & BENCHMARKING.

    Indices based on Expert Panelists comprising some of the largest Airlines and Forwarders in the World.

    Calculating Agent: TAC Index Limited

    API & Charting offered.

    Published Daily reflecting previous day's pricing data.

    Subscribers include Top 10 Fortune 500 companies & major global financial institutions.

    Calculation methodology fully transparent and available on request.

    Indices are calculated by the Baltic Exchange in compliance with the UK Financial Conduct Authority's guidelines. This proven methodology has been applied by the Baltic Exchange to other industry segments for over 40 years.

    Data subscriptions available under numerous categories for both current and historical data. Current data is generally for active traders and is more expensive; delayed & historical data is considerably lower cost.

    Contact our Sales Team for detailed pricing information.

  20. Bangladesh BD: Net Barter Terms of Trade Index

    • ceicdata.com
    Cite
    CEICdata.com, Bangladesh BD: Net Barter Terms of Trade Index [Dataset]. https://www.ceicdata.com/en/bangladesh/trade-index/bd-net-barter-terms-of-trade-index
    Explore at:
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 1, 2009 - Dec 1, 2020
    Area covered
    Bangladesh
    Variables measured
    Merchandise Trade
    Description

    Bangladesh BD: Net Barter Terms of Trade Index data was reported at 68.332 2000=100 in 2020. This records an increase from the previous number of 65.803 2000=100 for 2019. Bangladesh BD: Net Barter Terms of Trade Index data is updated yearly, averaging 103.596 2000=100 from Dec 1980 (Median) to 2020, with 41 observations. The data reached an all-time high of 162.264 2000=100 in 1985 and a record low of 57.575 2000=100 in 2011. Bangladesh BD: Net Barter Terms of Trade Index data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database's Bangladesh – Table BD.World Bank.WDI: Trade Index. Net barter terms of trade index is calculated as the percentage ratio of the export unit value indexes to the import unit value indexes, measured relative to the base year 2000. Unit value indexes are based on data reported by countries that demonstrate consistency under UNCTAD quality controls, supplemented by UNCTAD's estimates using the previous year's trade values at the Standard International Trade Classification three-digit level as weights. To improve data coverage, especially for the latest periods, UNCTAD constructs a set of average prices indexes at the three-digit product classification of the Standard International Trade Classification revision 3 using UNCTAD's Commodity Price Statistics, international and national sources, and UNCTAD secretariat estimates, and calculates unit value indexes at the country level using the current year's trade values as weights. Source: United Nations Conference on Trade and Development, Handbook of Statistics and data files, and International Monetary Fund, International Financial Statistics.
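The ratio described above is straightforward to compute. The figures in this sketch are illustrative only, not the actual Bangladesh series.

```python
# Net barter terms of trade = 100 * export unit value index / import unit value index,
# with both unit value indexes measured relative to the base year (2000 = 100).
def net_barter_tot(export_uv_index, import_uv_index):
    return 100.0 * export_uv_index / import_uv_index

# Hypothetical example: exports at 120 and imports at 175 relative to the base year
# give a terms-of-trade index below 100, i.e. worsened terms of trade since 2000.
tot = net_barter_tot(120.0, 175.0)
```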


The extraction process applies different criteria: in the 16 index CSV files all columns are included, and the Adj Close column is used to calculate the annualized return. The algorithm extracts data based on the index name (the code given by Yahoo Finance) and the start and end dates.

The annualized return and CAGR have been calculated and are illustrated in the images below, with machine-readable files (CSV) attached alongside them.

To extract the data provided in the attachment, various criteria were applied:

  1. Content Filtering: The data was filtered based on several attributes, including the index name and the start and end dates. This filtering ensured that only data meeting the specified criteria was retained.

  2. Collaborative Filtering: Another technique used was collaborative filtering via Yahoo Finance, which relies on index similarity. This approach finds indices that are similar to a given index, or extends the dataset's scope to other countries and economies. By leveraging this method, the algorithm identifies and extracts data based on similarities between indices.

Of the last two CSV files, one contains the annualized return, which was calculated from the Adj Close column and stored in a new DataFrame. Below is an image of the annualized returns of all indices (if unreadable, a machine-readable CSV is attached to the dataset).
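The yearly-return computation described above can be sketched as follows (an illustrative approximation, not the author's exact code): take the last Adj Close of each calendar year and compute the year-over-year percentage change.

```python
import pandas as pd

def annualized_returns(df: pd.DataFrame) -> pd.Series:
    """Percent return per calendar year, from the last Adj Close of each year.

    Illustrative sketch: assumes df has a DatetimeIndex and an 'Adj Close' column,
    as in the 16 index CSV files described above.
    """
    yearly_close = df["Adj Close"].groupby(df.index.year).last()
    return yearly_close.pct_change() * 100

# Tiny synthetic example: 100 -> 110 across consecutive year-ends
# corresponds to a 10% yearly return.
demo = pd.DataFrame(
    {"Adj Close": [100.0, 110.0]},
    index=pd.to_datetime(["2020-12-31", "2021-12-31"]),
)
```

Grouping by `df.index.year` sidesteps calendar gaps (weekends, holidays) because it simply takes whatever the final traded price of each year was.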

Annualized Return:

In terms of annualized rate of return, the Indian stock market indices lead most of the time, followed by the USA, Canada, and Japan stock market indices.

Image: Annualized Return — https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F15657145%2F37645bd90623ea79f3708a958013c098%2FAnnualized%20Return.JPG?generation=1688525901452892&alt=media

Compound Annual Growth Rate (CAGR):

The best-performing index by compound growth is Sensex (India), which comprises the top 30 companies, at 15.60%, followed by Nifty500 (India) at 11.34% and Nasdaq (USA) at 10.60%.

The worst-performing index is China_Top300; however, it was launched in 2021 (post-pandemic), so a fair assessment is not yet possible (due to limited data availability). Furthermore, the UK and Russia indices are also among the five worst performers.
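For reference, CAGR is the constant yearly growth rate that links the first and last prices of the period. A minimal sketch of the standard formula (not the author's exact code):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Example: quadrupling over 2 years corresponds to 100% growth per year.
# cagr(100.0, 400.0, 2.0) -> 1.0
```

This also explains why China_Top300's figure is unreliable: with `years` barely above 2, a single bad stretch dominates the whole exponent.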

Image: Compound Annual Growth Rate (CAGR) — https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F15657145%2F58ae33f60a8800749f802b46ec1e07e7%2FCAGR.JPG?generation=1688490409606631&alt=media

Geography: Stock Market Index of the World Top Economies

Time period: Jan 01, 2003 – June 30, 2023

Variables: Stock Market Index Title, Open, High, Low, Close, Adj Close, Volume, Year, Month, Day, Yearly_Return and CAGR

File Type: CSV file
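The Year, Month, and Day variables listed above can be derived from the date index of each downloaded frame; a small sketch under that assumption (illustrative, not the author's exact code):

```python
import pandas as pd

# Sketch: derive the Year/Month/Day columns listed in the variables above
# from a DatetimeIndex (synthetic two-row frame for illustration).
df = pd.DataFrame(
    {"Adj Close": [100.0, 101.5]},
    index=pd.to_datetime(["2003-01-02", "2003-01-03"]),
)
df["Year"] = df.index.year
df["Month"] = df.index.month
df["Day"] = df.index.day
```

Storing these as explicit columns lets consumers of the CSV files group by year or month without re-parsing dates.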

Inspiration:

  • Time series prediction models
  • Investment opportunities in the world's leading economies
  • Comparative analysis of past data against other stock market or economic indices

Disclaimer:

This is not financial advice; due diligence is required for each investment decision.
