97 datasets found
  1. Elementary index bias: evidence for the euro area from a large scanner...

    • journaldata.zbw.eu
    • datasearch.gesis.org
    stata do
    Updated Mar 3, 2021
    Cite
    Eniko Gábor-Tóth; Philip Vermeulen (2021). Elementary index bias: evidence for the euro area from a large scanner dataset [Dataset]. http://doi.org/10.15456/ger.2018346.155305
    Explore at:
    stata do
    Available download formats
    Dataset updated
    Mar 3, 2021
    Dataset provided by
    ZBW - Leibniz Informationszentrum Wirtschaft
    Authors
    Eniko Gábor-Tóth; Philip Vermeulen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We provide evidence on the effect of elementary index choice on inflation measurement in the euro area. Using scanner data for 15,844 individual items from 42 product categories and 10 euro area countries, we compute product-category-level elementary price indexes using eight different elementary index formulas. Measured inflation outcomes of the different index formulas are compared with the Fisher ideal index to quantify elementary index bias. We have three main findings. First, elementary index bias is quite variable across product categories, countries and index formulas. Second, a comparison of elementary index formulas with and without expenditure weights shows that a shift from price-only indexes to expenditure-weighted indexes would entail differences of multiple percentage points in measured price changes at the product level. Finally, we show that elementary index bias is quantitatively more important than upper-level substitution bias.
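Using illustrative numbers (not the paper's scanner data), the gap between a price-only elementary formula such as Jevons and the expenditure-weighted Fisher ideal index can be sketched as:

```python
import math

def jevons(p0, p1):
    # Unweighted geometric mean of price relatives (a price-only formula).
    rels = [b / a for a, b in zip(p0, p1)]
    return math.prod(rels) ** (1.0 / len(rels))

def fisher(p0, p1, q0, q1):
    # Fisher ideal index: geometric mean of the expenditure-weighted
    # Laspeyres and Paasche indexes.
    laspeyres = sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))
    paasche = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))
    return math.sqrt(laspeyres * paasche)

# Hypothetical prices and quantities for one product category, two periods.
p0, p1 = [1.00, 2.00, 4.00], [1.10, 1.90, 4.40]
q0, q1 = [10.0, 5.0, 2.0], [9.0, 6.0, 2.0]
bias = jevons(p0, p1) - fisher(p0, p1, q0, q1)  # elementary index bias
```

With these toy numbers the Jevons index overstates the Fisher benchmark by roughly half a percentage point, mirroring the product-level differences the abstract describes.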

  2. Data from: DEEPEN Global Standardized Categorical Exploration Datasets for...

    • catalog.data.gov
    • data.openei.org
    • +1 more
    Updated Jan 20, 2025
    + more versions
    Cite
    National Renewable Energy Laboratory (2025). DEEPEN Global Standardized Categorical Exploration Datasets for Magmatic Plays [Dataset]. https://catalog.data.gov/dataset/deepen-global-standardized-categorical-exploration-datasets-for-magmatic-plays-f1ecf
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Description

    DEEPEN stands for DE-risking Exploration of geothermal Plays in magmatic ENvironments. As part of the development of the DEEPEN 3D play fairway analysis (PFA) methodology for magmatic plays (conventional hydrothermal, superhot EGS, and supercritical), weights needed to be developed for use in the weighted sum of the different favorability index models produced from geoscientific exploration datasets. This was done using two different approaches: one based on expert opinions, and one based on statistical learning. This GDR submission includes the datasets used to produce the statistical-learning-based weights. While expert opinions allow us to include more nuanced information in the weights, they are subject to human bias. Data-centric or statistical approaches help to overcome these potential biases by focusing on, and drawing conclusions from, the data alone. The drawback is that a dataset is needed to apply these types of approaches. Therefore, we attempted to build comprehensive standardized datasets mapping anomalies in each exploration dataset to each component of each play. These data were gathered through a literature review focused on magmatic hydrothermal plays, along with well-characterized areas where superhot or supercritical conditions are thought to exist. Datasets were assembled for all three play types, but the hydrothermal dataset is the least complete due to its relatively low priority. For each known or assumed resource, the dataset states what anomaly in each exploration dataset is associated with each component of the system. The data are only semi-quantitative: values are either high, medium, or low, relative to background levels. In addition, the dataset has significant gaps, as not every possible exploration dataset has been collected and analyzed at every known or suspected geothermal resource area in the context of all possible play types.
    The following training sites were used to assemble this dataset:
    • Conventional magmatic hydrothermal: Akutan (from AK PFA), Oregon Cascades PFA, Glass Buttes OR, Mauna Kea (from HI PFA), Lanai (from HI PFA), Mt St Helens Shear Zone (from WA PFA), Wind River Valley (from WA PFA), Mount Baker (from WA PFA).
    • Superhot EGS: Newberry (EGS demonstration project), Coso (EGS demonstration project), Geysers (EGS demonstration project), Eastern Snake River Plain (EGS demonstration project), Utah FORGE, Larderello, Kakkonda, Taupo Volcanic Zone, Acoculco, Krafla.
    • Supercritical: Coso, Geysers, Salton Sea, Larderello, Los Humeros, Taupo Volcanic Zone, Krafla, Reykjanes, Hengill.
    Disclaimer: Treat the supercritical fluid anomalies with skepticism. They are based on assumptions due to the general lack of confirmed supercritical fluid encounters and samples at the sites included in this dataset at the time of its assembly. The main assumption was that the supercritical fluid in a given geothermal system shares properties with the hydrothermal fluid, which may not be the case in reality.
    Once the datasets were assembled, principal component analysis (PCA) was applied to each. PCA is an unsupervised statistical learning technique, meaning that labels are not required on the data, that summarizes the directions of variance in the data. This approach was chosen because our labels are not certain, i.e., we do not know with 100% confidence that superhot resources exist at all the assumed positive areas. We also do not have data for any known non-geothermal areas, meaning that it would be challenging to apply a supervised learning technique. To generate weights from the PCA, an analysis of the PCA loading values was conducted. PCA loading values represent how much a feature contributes to each principal component, and therefore to the overall variance in the data.
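As a rough sketch of the statistical-learning approach described above (the anomaly table and the 0/1/2 encoding of low/medium/high are made up, not the GDR data), PCA loadings can be converted into normalized weights like this:

```python
import numpy as np

# Hypothetical semi-quantitative anomaly table: rows are training sites,
# columns are exploration datasets; low/medium/high encoded as 0/1/2.
X = np.array([
    [2, 1, 2, 0],
    [1, 2, 2, 1],
    [2, 2, 1, 0],
    [0, 1, 2, 2],
    [1, 0, 1, 1],
], dtype=float)

# Standardize, then diagonalize the covariance matrix (PCA).
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
loadings = eigvecs[:, np.argsort(eigvals)[::-1]]  # columns ordered PC1, PC2, ...

# Weight each exploration dataset by the magnitude of its loading on PC1,
# normalized to sum to one (one plausible reading of the loading analysis).
w = np.abs(loadings[:, 0])
weights = w / w.sum()
```

The actual DEEPEN analysis may combine loadings across several components; this sketch only shows the mechanical step from loadings to a weight vector.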

  3. Fixed-weighted index of average hourly earnings, (SEPH)

    • open.canada.ca
    • datasets.ai
    • +2 more
    csv, html, xml
    Updated Jan 17, 2023
    Cite
    Statistics Canada (2023). Fixed-weighted index of average hourly earnings, (SEPH) [Dataset]. https://open.canada.ca/data/en/dataset/afa548db-554d-4085-9d08-198501bd970c
    Explore at:
    xml, csv, html
    Available download formats
    Dataset updated
    Jan 17, 2023
    Dataset provided by
    Statistics Canada
    License

    Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Description

    This table contains 33 series, with data for years 1983 - 2000 (not all combinations necessarily have data for all years). This table contains data described by the following dimensions (not all combinations are available): Unit of measure (1 item: Index ...), Geography (13 items: Canada; Prince Edward Island; Nova Scotia; Newfoundland and Labrador ...), Standard Industrial Classification, 1980 (SIC) (21 items: Logging and forestry industries; Mining (including milling), quarrying and oil well industries; Goods producing industries; Industrial aggregate excluding unclassified establishments ...), Fixed weighted index, average hourly earnings (1 item: Fixed weighted index; average hourly earnings ...), Type of employee (1 item: All employees ...).

  4. Trading Signals (Taiwan Weighted Index Stock Forecast) (Forecast)

    • kappasignal.com
    Updated Nov 9, 2022
    Cite
    KappaSignal (2022). Trading Signals (Taiwan Weighted Index Stock Forecast) (Forecast) [Dataset]. https://www.kappasignal.com/2022/11/trading-signals-taiwan-weighted-index.html
    Explore at:
    Dataset updated
    Nov 9, 2022
    Dataset authored and provided by
    KappaSignal
    License

    https://www.kappasignal.com/p/legal-disclaimer.html

    Description

    This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.

    Trading Signals (Taiwan Weighted Index Stock Forecast)

    Financial data:

    • Historical daily stock prices (open, high, low, close, volume)

    • Fundamental data (e.g., market capitalization, price to earnings P/E ratio, dividend yield, earnings per share EPS, price to earnings growth, debt-to-equity ratio, price-to-book ratio, current ratio, free cash flow, projected earnings growth, return on equity, dividend payout ratio, price to sales ratio, credit rating)

    • Technical indicators (e.g., moving averages, RSI, MACD, average directional index, aroon oscillator, stochastic oscillator, on-balance volume, accumulation/distribution A/D line, parabolic SAR indicator, bollinger bands indicators, fibonacci, williams percent range, commodity channel index)

    Machine learning features:

    • Feature engineering based on financial data and technical indicators

    • Sentiment analysis data from social media and news articles

    • Macroeconomic data (e.g., GDP, unemployment rate, interest rates, consumer spending, building permits, consumer confidence, inflation, producer price index, money supply, home sales, retail sales, bond yields)

    Potential Applications:

    • Stock price prediction

    • Portfolio optimization

    • Algorithmic trading

    • Market sentiment analysis

    • Risk management

    Use Cases:

    • Researchers investigating the effectiveness of machine learning in stock market prediction

    • Analysts developing quantitative trading Buy/Sell strategies

    • Individuals interested in building their own stock market prediction models

    • Students learning about machine learning and financial applications

    Additional Notes:

    • The dataset may include different levels of granularity (e.g., daily, hourly)

    • Data cleaning and preprocessing are essential before model training

    • Regular updates are recommended to maintain the accuracy and relevance of the data

  5. Stock Market Dataset (NIFTY-500)

    • kaggle.com
    Updated Jun 10, 2023
    Cite
    Sourav Banerjee (2023). Stock Market Dataset (NIFTY-500) [Dataset]. https://www.kaggle.com/datasets/iamsouravbanerjee/nifty500-stocks-dataset
    Explore at:
    Croissant
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    Kaggle
    Authors
    Sourav Banerjee
    Description

    Context

    NIFTY 500 is India’s first broad-based stock market index. It contains the top 500 companies listed on the NSE. The NIFTY 500 index represents about 96.1% of free-float market capitalization and 96.5% of the total turnover on the National Stock Exchange (NSE).

    NIFTY 500 companies are disaggregated into 72 industry indices. Industry weights in the index reflect industry weights in the market. For example, if the banking sector has a 5% weight in the universe of stocks traded on the NSE, banking stocks in the index would also have an approximate representation of 5% in the index. NIFTY 500 can be used for a variety of purposes such as benchmarking fund portfolios, launching index funds, ETFs, and other structured products.
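The proportional-representation idea above can be sketched with hypothetical free-float capitalizations (not actual NSE figures):

```python
def index_weights(free_float_caps):
    # Sector weights proportional to free-float market capitalization, so a
    # sector's index weight mirrors its share of the traded universe.
    total = sum(free_float_caps.values())
    return {sector: cap / total for sector, cap in free_float_caps.items()}

index_weights({"Banking": 50.0, "IT": 30.0, "Energy": 20.0})
# {'Banking': 0.5, 'IT': 0.3, 'Energy': 0.2}
```

A sector holding half the free-float value of the universe ends up with half the index weight, which is the behaviour the banking example in the text describes.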

    • Other Notable Indices -
      • NIFTY 50: Top 50 listed companies on the NSE. A diversified 50-stock index accounting for 13 sectors of the Indian economy.
      • NIFTY Next 50: Also called NIFTY Juniors. Represents 50 companies from NIFTY 100 after excluding the NIFTY 50 companies.
      • NIFTY 100: Diversified 100 stock index representing major sectors of the economy. NIFTY 100 represents the top 100 companies based on full market capitalization from NIFTY 500.
      • NIFTY 200: Designed to reflect the behavior and performance of large and mid-market capitalization companies.

    Content

    The dataset comprises various parameters and features for each of the NIFTY 500 Stocks, including Company Name, Symbol, Industry, Series, Open, High, Low, Previous Close, Last Traded Price, Change, Percentage Change, Share Volume, Value in Indian Rupee, 52 Week High, 52 Week Low, 365 Day Percentage Change, and 30 Day Percentage Change.

    Dataset Glossary (Column-Wise)

    Company Name: Name of the Company.

    Symbol: A stock symbol is a unique series of letters assigned to a security for trading purposes.

    Industry: Name of the industry to which the stock belongs.

    Series: EQ stands for Equity; in this series, intraday trading is possible in addition to delivery. BE stands for Book Entry; shares falling in the Trade-to-Trade or T-segment are traded in this series and no intraday trading is allowed, meaning trades can only be settled by accepting or giving delivery of shares.

    Open: It is the price at which the financial security opens in the market when trading begins. It may or may not be different from the previous day's closing price. The security may open at a higher price than the closing price due to excess demand for the security.

    High: It is the highest price at which a stock is traded during the course of the trading day, and is typically higher than, or equal to, the closing and opening prices.

    Low: Today's low is a security's intraday low trading price, i.e., the lowest price at which the stock trades over the course of the trading day.

    Previous Close: The previous close almost always refers to the prior day's final price of a security when the market officially closes for the day. It can apply to a stock, bond, commodity, futures or options contract, market index, or any other security.

    Last Traded Price: The last traded price (LTP) usually differs from the closing price of the day. This is because the closing price of the day on NSE is the weighted average price of the last 30 mins of trading. The last traded price of the day is the actual last traded price.

    Change: For a stock or bond quote, change is the difference between the current price and the last trade of the previous day. For interest rates, change is benchmarked against a major market rate (e.g., LIBOR) and may only be updated as infrequently as once a quarter.

    Percentage Change: Take the selling price and subtract the initial purchase price. The result is the gain or loss. Take the gain or loss from the investment and divide it by the original amount or purchase price of the investment. Finally, multiply the result by 100 to arrive at the percentage change in the investment.

    Share Volume: Volume is the total number of shares that have been bought or sold in a specific period of time, typically during the trading day.

    Value (Indian Rupee): Market value—also known as market cap—is calculated by multiplying a company's outstanding shares by its current market price.

    52-Week High: A 52-week high is the highest share price that a stock has traded at during the past year. Many market aficionados view the 52-week high as an important factor in determining a stock's current value and predicting future price movement. 52-week high prices are adjusted for Bonus, Split & Rights corporate actions.

    52-Week Low: A 52-week low is the lowest ...
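The percentage-change steps in the glossary above translate directly into code (the prices are illustrative):

```python
def percentage_change(purchase_price, selling_price):
    # Gain or loss relative to the original purchase price, in percent,
    # following the glossary steps: subtract, divide, multiply by 100.
    gain_or_loss = selling_price - purchase_price
    return gain_or_loss / purchase_price * 100

percentage_change(250.0, 265.0)  # ≈ 6.0 (a 6% gain)
```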

  6. Taiwan Weighted Index: A Reliable Indicator of Economic Health? (Forecast)

    • kappasignal.com
    Updated Sep 29, 2024
    + more versions
    Cite
    KappaSignal (2024). Taiwan Weighted Index: A Reliable Indicator of Economic Health? (Forecast) [Dataset]. https://www.kappasignal.com/2024/09/taiwan-weighted-index-reliable.html
    Explore at:
    Dataset updated
    Sep 29, 2024
    Dataset authored and provided by
    KappaSignal
    License

    https://www.kappasignal.com/p/legal-disclaimer.html

    Description

    This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.

    Taiwan Weighted Index: A Reliable Indicator of Economic Health?


  7. National House Construction Cost Index

    • find.data.gov.scot
    • dtechtive.com
    • +3 more
    csv, json
    Updated Dec 9, 2016
    + more versions
    Cite
    DHLGH (uSmart) (2016). National House Construction Cost Index [Dataset]. https://find.data.gov.scot/datasets/38858
    Explore at:
    json(null MB), csv(0.0021 MB)
    Available download formats
    Dataset updated
    Dec 9, 2016
    Dataset provided by
    DHLGH (uSmart)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    national
    Description

    The index relates to costs ruling on the first day of each month.
    NATIONAL HOUSE CONSTRUCTION COST INDEX: up until October 2006 it was known as the National House Building Index.
    Oct 2000 data: the index since October 2000 includes the first phase of an agreement following a review of rates of pay and grading structures for the construction industry, and the first phase increase under the PPF. April, May and June 2001: figures revised in July 2001 due to 2% PPF Revised Terms. March 2002: the drop in the March 2002 figure is due to a decrease in the rate of PRSI from 12% to 10 3/4% with effect from 1 March 2002. The index from April 2002 excludes the one-off lump sum payment equal to 1% of basic pay on 1 April 2002 under the PPF. April, May, June 2003: figures revised in August 2003 due to the backdated increase of 3% from 1 April 2003 under the National Partnership Agreement 'Sustaining Progress'. The increases in the April and October 2006 index are due to the Social Partnership Agreement 'Towards 2016'. March 2011: the drop in the March 2011 figure is due to a 7.5% decrease in labour costs.
    Methodology in producing the index prior to October 2006: the index relates solely to labour and material costs, which should normally not exceed 65% of the total price of a house. The House Building Cost Index monitors labour costs in the construction industry and the cost of building materials. It does not include items such as overheads, profit, interest charges or land development. The labour costs include insurance cover and the building material costs include V.A.T. Coverage: the type of construction covered is a typical 3-bedroomed, 2-level local authority house, and the index is applied on a national basis. Data collection: the labour costs are based on agreed labour rates, allowances etc. The building material prices are collected at the beginning of each month from the same suppliers for the same representative basket. Calculation: labour and material costs for the construction of a typical 3-bedroomed house are weighted together to produce the index.
    Post October 2006: the name change from the House Building Cost Index to the House Construction Cost Index was introduced in October 2006, when the method of assessing the materials sub-index was changed from pricing a basket of materials (representative of a typical 2-storey, 3-bedroomed local authority house) to the CSO Table 3 Wholesale Price Index. The new index maintains continuity with the old HBCI. The most current data is published on these sheets. Previously published data may be subject to revision; any change from the originally published data will be highlighted by a comment on the cell in question. These comments will be maintained for at least a year after the date of the value change. Oct 2008 data: decrease due to a fall in the Oct Wholesale Price Index.
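The "weighted together" calculation step can be sketched as follows; the 60/40 labour/materials split is purely illustrative, since the description does not publish the actual weights:

```python
def construction_cost_index(labour_rel, material_rel, labour_weight=0.6):
    # Combine labour and material cost relatives (current cost / base cost)
    # into a single index with base = 100. The weight is hypothetical.
    return 100 * (labour_weight * labour_rel + (1 - labour_weight) * material_rel)

construction_cost_index(1.03, 1.01)  # ≈ 102.2
```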

  8. Taiwan Weighted Index Options & Futures Prediction (Forecast)

    • kappasignal.com
    Updated Sep 2, 2022
    Cite
    KappaSignal (2022). Taiwan Weighted Index Options & Futures Prediction (Forecast) [Dataset]. https://www.kappasignal.com/2022/09/taiwan-weighted-index-options-futures.html
    Explore at:
    Dataset updated
    Sep 2, 2022
    Dataset authored and provided by
    KappaSignal
    License

    https://www.kappasignal.com/p/legal-disclaimer.html

    Description

    This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.

    Taiwan Weighted Index Options & Futures Prediction


  9. [Video] Taiwan Weighted Index: A Barometer for the Future? (Forecast)

    • kappasignal.com
    Updated Apr 5, 2024
    Cite
    KappaSignal (2024). [Video] Taiwan Weighted Index: A Barometer for the Future? (Forecast) [Dataset]. https://www.kappasignal.com/2024/04/video-taiwan-weighted-index-barometer-for.html
    Explore at:
    Dataset updated
    Apr 5, 2024
    Dataset authored and provided by
    KappaSignal
    License

    https://www.kappasignal.com/p/legal-disclaimer.html

    Description

    This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.

    [Video] Taiwan Weighted Index: A Barometer for the Future?


  10. Dataset Direct Download Service (WFS): Noise zones (Type A map, LD index) of...

    • gimi9.com
    • data.europa.eu
    Updated Feb 19, 2022
    + more versions
    Cite
    (2022). Dataset Direct Download Service (WFS): Noise zones (Type A map, LD index) of the A570, unlicensed national road network [Dataset]. https://gimi9.com/dataset/eu_fr-120066022-srv-f21c3898-d41f-4323-af97-046cc02242e2
    Explore at:
    Dataset updated
    Feb 19, 2022
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    European Directive 2002/49/EC of 25 June 2002 on the assessment and management of environmental noise requires EU Member States to assess environmental noise in the vicinity of major transport infrastructure (land and air) and in large agglomerations. This assessment is carried out in particular through the development of so-called noise maps, the first series of which were drawn up in 2007 (1st deadline of the Directive) and 2012 (2nd deadline). Article L572-5 of the Environmental Code states that these maps are "reviewed, and if necessary revised, at least every five years". Thus, the implementation of this review leads, in 2017 and as appropriate, to revising or renewing the maps previously developed.
    Strategic Noise Maps (CBS) are designed to allow for the overall assessment of exposure to noise and to forecast its evolution. CBS are required in particular for road infrastructure with annual traffic of more than 3 million vehicles per year. For major road and rail transport infrastructure, the CBS are established, decided and approved under the authority of the prefect of the department.
    Noise maps are developed according to the indicators established by the European Directive, namely Lden (Day Evening Night Level) and Ln (Night Level):
    • Day: [6h-18h]
    • Evening: [18h-22h]
    • Night: [22h-6h]
    The Lden and Ln indicators correspond to an energy average defined over the Day/Evening/Night periods for Lden and the Night period for Ln. The corresponding results are expressed in A-weighted decibels, dB(A).
    Areas exposed to noise (type A map): these are two maps representing
    • areas exposed to more than 55 dB(A) in Lden
    • areas exposed to more than 50 dB(A) in Ln
    They are presented in the form of isophone curves materialising areas of the same sound level, plotted in steps of 5 dB(A) from the threshold of 55 dB(A) in Lden and 50 dB(A) in Ln.
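The Lden indicator is an energy average over the three periods with +5 dB(A) evening and +10 dB(A) night penalties; a sketch following the definition in Annex I of Directive 2002/49/EC:

```python
import math

def lden(l_day, l_evening, l_night):
    # 24-hour energy average with +5 dB evening and +10 dB night penalties,
    # over the 12 h day, 4 h evening and 8 h night periods listed above.
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

lden(60.0, 60.0, 60.0)  # ≈ 66.4 dB(A): the penalties lift it above 60
```

Because evening and night noise are penalized, a road that is equally loud around the clock sits several dB(A) higher in Lden than its raw level, which is why the 55 dB(A) Lden threshold is crossed more easily than it may first appear.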

  11. Data from: Rivality index neighbourhood algorithm with density and distances...

    • tandf.figshare.com
    xlsx
    Updated May 30, 2023
    Cite
    I. Luque Ruiz; M.Á. Gómez-Nieto (2023). Rivality index neighbourhood algorithm with density and distances weighted schemes for the building of robust QSAR classification models with high reliable applicability domain [Dataset]. http://doi.org/10.6084/m9.figshare.9752816.v1
    Explore at:
    xlsx
    Available download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    I. Luque Ruiz; M.Á. Gómez-Nieto
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The rivality index (RI) is a normalized distance measure between a molecule and its first nearest neighbours, providing a robust prediction of a molecule's activity based on the known activity of its nearest neighbours. Negative values of the RI describe molecules that would be correctly classified by a statistical algorithm; conversely, positive values describe molecules detected as outliers by the classification algorithms. In this paper, we describe a classification algorithm based on the RI and propose four weighted schemes (kernels) for its calculation, based on measuring different characteristics of each molecule's neighbourhood at established values of the neighbour threshold. The results demonstrate that the proposed classification algorithm, based on the RI, generates more reliable and robust classification models than many widely used machine learning algorithms. These results were validated and corroborated using 20 balanced and unbalanced benchmark datasets of different sizes and modelability. The generated classification models provide valuable information about the molecules of the dataset, the applicability domain of the models and the reliability of the predictions.
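
    The abstract does not reproduce the RI formula. As a rough illustration of the idea only (a hypothetical normalized nearest-neighbour score, not the authors' exact index or their weighted kernels), one might score each molecule by comparing its distance to the nearest same-class and nearest opposite-class neighbour:

```python
import numpy as np

def rivality_index(X, y):
    """Illustrative rivality-style score (hypothetical formula):
    RI_i = (d_same - d_rival) / max(d_same, d_rival), where d_same is the
    distance from molecule i to its nearest neighbour of the same class and
    d_rival to its nearest neighbour of the other class.
    Negative RI -> a nearest-neighbour classifier would label i correctly.
    Requires at least two samples per class."""
    X, y = np.asarray(X, float), np.asarray(y)
    # Pairwise Euclidean distances; self-distances excluded via +inf diagonal.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    ri = np.empty(len(y))
    for i in range(len(y)):
        d_same = D[i, y == y[i]].min()
        d_rival = D[i, y != y[i]].min()
        ri[i] = (d_same - d_rival) / max(d_same, d_rival)
    return ri
```

    This matches the sign convention in the description: well-embedded molecules get negative scores, while a molecule sitting closer to the opposite class (an outlier for a neighbourhood classifier) gets a positive score.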

  12. Is the Taiwan Weighted Index Poised for a Bullish Reversal? (Forecast)

    • kappasignal.com
    Updated Mar 19, 2024
    KappaSignal (2024). Is the Taiwan Weighted Index Poised for a Bullish Reversal? (Forecast) [Dataset]. https://www.kappasignal.com/2024/03/is-taiwan-weighted-index-poised-for.html
    Explore at:
    Dataset updated
    Mar 19, 2024
    Dataset authored and provided by
    KappaSignal
    License

    https://www.kappasignal.com/p/legal-disclaimer.htmlhttps://www.kappasignal.com/p/legal-disclaimer.html

    Description

    This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.

    Is the Taiwan Weighted Index Poised for a Bullish Reversal?

    Financial data:

    • Historical daily stock prices (open, high, low, close, volume)

    • Fundamental data (e.g., market capitalization, price-to-earnings (P/E) ratio, dividend yield, earnings per share (EPS), price/earnings-to-growth ratio, debt-to-equity ratio, price-to-book ratio, current ratio, free cash flow, projected earnings growth, return on equity, dividend payout ratio, price-to-sales ratio, credit rating)

    • Technical indicators (e.g., moving averages, RSI, MACD, Average Directional Index, Aroon oscillator, stochastic oscillator, on-balance volume, accumulation/distribution (A/D) line, Parabolic SAR, Bollinger Bands, Fibonacci retracements, Williams %R, Commodity Channel Index)

    Machine learning features:

    • Feature engineering based on financial data and technical indicators

    • Sentiment analysis data from social media and news articles

    • Macroeconomic data (e.g., GDP, unemployment rate, interest rates, consumer spending, building permits, consumer confidence, inflation, producer price index, money supply, home sales, retail sales, bond yields)

    Potential Applications:

    • Stock price prediction

    • Portfolio optimization

    • Algorithmic trading

    • Market sentiment analysis

    • Risk management

    Use Cases:

    • Researchers investigating the effectiveness of machine learning in stock market prediction

    • Analysts developing quantitative trading Buy/Sell strategies

    • Individuals interested in building their own stock market prediction models

    • Students learning about machine learning and financial applications

    Additional Notes:

    • The dataset may include different levels of granularity (e.g., daily, hourly)

    • Data cleaning and preprocessing are essential before model training

    • Regular updates are recommended to maintain the accuracy and relevance of the data
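
    A few of the technical indicators listed above follow directly from the daily close series. A minimal pandas sketch of two of them, the simple moving average and a Wilder-style RSI (function names are ours):

```python
import pandas as pd

def sma(close: pd.Series, window: int = 20) -> pd.Series:
    """Simple moving average of closing prices."""
    return close.rolling(window).mean()

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Wilder's Relative Strength Index on a close-price series.

    Average gains and losses use Wilder's smoothing, i.e. an exponentially
    weighted mean with alpha = 1/period."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, min_periods=period).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, min_periods=period).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)
```

    As the dataset notes, preprocessing matters: both functions emit NaN until their windows fill, so the first `window`/`period` rows should be dropped before feeding features to a model.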

  13. 15 Minute City Index

    • data.clevelandohio.gov
    Updated Feb 16, 2022
    + more versions
    Cleveland | GIS (2022). 15 Minute City Index [Dataset]. https://data.clevelandohio.gov/datasets/ClevelandGIS::15-minute-city-index/about
    Explore at:
    Dataset updated
    Feb 16, 2022
    Dataset authored and provided by
    Cleveland | GIS
    License

    Open Database License (ODbL) v1.0https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Area covered
    Description

    15 Minute City Index Version 10

    The 15 Minute City Index is the output of a weighted sum analysis of all the walksheds from 15 Minute City Points of Interest gathered by City Planning. The final index value reflects how many points of interest are within walking distance and has no operational implications for City services.

    The animation below demonstrates how the different walking distance areas are combined by weight to create a total index score. Higher scores indicate better access to services, amenities, and stores. Walkability is also shaped by factors such as design, safety, and street environment.

    This work is preliminary and in development.

    Data Glossary See the Attributes section below for details about each column in this dataset. The following Amenity Weighting chart should be used in conjunction with the attribute gridcode.

    Amenity Weighting (Amenity Type: Weight)

     Grocery Store: 5
     High Frequency RTA: 5
     Schools: 5
     Healthcare / Hospital: 3
     Public Library: 3
     Pharmacy: 3
     Park Access: 3
     Daycares: 3
     Cafes: 1
     Laundries: 1
     Bank: 1
     Fitness Centers: 1
     Hair Care: 1

    Update Frequency Annually

    Contacts Cleveland City Planning Commission, Strategic Initiatives cityplanning@clevelandohio.gov
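
    The weighted-sum scoring can be sketched with the weights from the Amenity Weighting chart above. This is a simplification that treats each amenity type as simply in or out of walking distance for a location, whereas the published index combines graduated walkshed distance areas by weight:

```python
# Amenity weights taken from the chart above.
AMENITY_WEIGHTS = {
    "Grocery Store": 5, "High Frequency RTA": 5, "Schools": 5,
    "Healthcare / Hospital": 3, "Public Library": 3, "Pharmacy": 3,
    "Park Access": 3, "Daycares": 3,
    "Cafes": 1, "Laundries": 1, "Bank": 1,
    "Fitness Centers": 1, "Hair Care": 1,
}

def index_score(amenities_within_walkshed) -> int:
    """Weighted-sum score for one location: add the weight of every
    amenity type whose walkshed covers the location (each counted once)."""
    return sum(AMENITY_WEIGHTS[a] for a in set(amenities_within_walkshed))
```

    A location covered by every amenity type would score 35; one with only a grocery store, pharmacy, and bank nearby would score 9.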

  14. Map Viewing Service (WMS) of the dataset: Noise zones (Type C map, LD index)...

    • data.europa.eu
    wms
    Updated Feb 19, 2022
    + more versions
    (2022). Map Viewing Service (WMS) of the dataset: Noise zones (Type C map, LD index) of the A8, licensed national road network [Dataset]. https://data.europa.eu/data/datasets/fr-120066022-srv-621ced97-1700-4267-a58a-5f79aabac7b0
    Explore at:
    wmsAvailable download formats
    Dataset updated
    Feb 19, 2022
    Description

    European Directive 2002/49/EC of 25 June 2002 on the assessment and management of environmental noise requires EU Member States to assess environmental noise in the vicinity of major transport infrastructure (land and air) and in large agglomerations. This assessment is carried out in particular through the development of so-called noise maps, the first series of which were drawn up in 2007 (1st deadline of the Directive) and 2012 (2nd deadline). Article L572-5 of the Environmental Code states that these maps are “reviewed, and if necessary revised, at least every five years”. The implementation of this review therefore led, in 2017 and as appropriate, to revising or renewing the maps previously developed.

    Strategic Noise Maps (CBS) are designed to allow for the overall assessment of exposure to noise and to forecast its evolution.

    CBS are required in particular for road infrastructure carrying more than 3 million vehicles per year. For major road and rail transport infrastructure, the CBS are established, decided and approved under the authority of the prefect of the department.

    Noise maps are developed according to the indicators established by the European Directive, namely Lden (Day Evening Night Level) and Ln (Night Level): • Day: [6h-18h] • Evening: [18h-22h] • Night: [22h-6h] The Lden and Ln indicators correspond to an energy average defined over the Day/Evening/Night periods for Lden and the Night period for Ln. The corresponding results are expressed in A-weighted decibels, dB(A).

    Type C maps represent areas where noise limit values are exceeded for residential, educational and health buildings. For road and high-speed railway lines, the limit values are 68 dB(A) in Lden and 62 dB(A) in Ln.

  15. 15 Minute City Index

    • hub.arcgis.com
    Updated Feb 16, 2022
    Cleveland | GIS (2022). 15 Minute City Index [Dataset]. https://hub.arcgis.com/maps/ClevelandGIS::15-minute-city-index
    Explore at:
    Dataset updated
    Feb 16, 2022
    Dataset authored and provided by
    Cleveland | GIS
    License

    Open Database License (ODbL) v1.0https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Area covered
    Description

    Version 10

    The 15 Minute City Index is the output of a weighted sum analysis of all the walksheds from 15 Minute City Points of Interest gathered by City Planning. The final index value reflects how many points of interest are within walking distance and has no operational implications for City services.

    The animation below demonstrates how the different walking distance areas are combined by weight to create a total index score. Higher scores indicate better access to services, amenities, and stores. Walkability is also shaped by factors such as design, safety, and street environment.

    This work is preliminary and in development.

    Data Glossary See the Attributes section below for details about each column in this dataset. The following Amenity Weighting chart should be used in conjunction with the attribute gridcode.

    Amenity Weighting (Amenity Type: Weight)

     Grocery Store: 5
     High Frequency RTA: 5
     Schools: 5
     Healthcare / Hospital: 3
     Public Library: 3
     Pharmacy: 3
     Park Access: 3
     Daycares: 3
     Cafes: 1
     Laundries: 1
     Bank: 1
     Fitness Centers: 1
     Hair Care: 1

    Update Frequency Annually

    Contacts Cleveland City Planning Commission, Strategic Initiatives cityplanning@clevelandohio.gov

  16. The CAPM with Measurement Error: "There's life in the old dog yet!"...

    • journaldata.zbw.eu
    .dat, .fmt, .gss +8
    Updated Mar 4, 2021
    Winfried Pohlmeier; Anastasia Simmet; Winfried Pohlmeier; Anastasia Simmet (2021). The CAPM with Measurement Error: "There's life in the old dog yet!" Replication data [Dataset]. http://doi.org/10.15456/jbnst.2019064.103528
    Explore at:
    txt, .dat, .fmt, .mat, csv, .gss, .inc, .out, application/vnd.wolfram.mathematica.package, pdb, pdfAvailable download formats
    Dataset updated
    Mar 4, 2021
    Dataset provided by
    ZBW - Leibniz Informationszentrum Wirtschaft
    Authors
    Winfried Pohlmeier; Anastasia Simmet; Winfried Pohlmeier; Anastasia Simmet
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The replication data contain MATLAB and GAUSS codes as well as the data required for replication of the results from the paper

    1. Monte Carlo Simulation:

    Contains codes and data for simulation study from Section 3.

    Data:

    • MV.mat, MV.txt- monthly data on market capitalization of the 205 stocks of the S&P500 index obtained from DataStream for the period 01.01.1974-01.05.2015

    • sp500_edata.mat - monthly data on close prices of components of the S&P500 index for the period 01.01.1974-01.05.2015, processed to obtain excess returns using the risk-free return from the French & Fama database. Description of the price data from DataStream: "The ‘current’ prices taken at the close of market are stored each day. These stored prices are adjusted for subsequent capital actions, and this adjusted figure then becomes the default price offered on all Research programs." Description of the excess return of the market from the French & Fama database: "the excess return on the market, value-weight return of all CRSP firms incorporated in the US and listed on the NYSE, AMEX, or NASDAQ that have a CRSP share code of 10 or 11 at the beginning of month t, good shares and price data at the beginning of t, and good return data for t, minus the one-month Treasury bill rate (from Ibbotson Associates)." From the latter file two separate data files were created (see CAPMsim.m):

    • sp500_stocks.txt, sp500_stocks.mat - monthly data on close prices of 205 components of S&P500 index for the period 01.01.1974-01.05.2015

    • FactorData.txt, FactorData.mat - the Fama & French factors from the French & Fama database for the period July 1926 - May 2015.

    Codes:

    • CAPMsim.m - the main code that replicates the Monte Carlo simulation of the artificial market and proxy indexes subject to different types of the measurement error.

    • sure.m- obtains the estimated parameters for the SUR system and performs hypothesis testing of the significance of the coefficients.

    2. Empirical Application

    Contains codes and data for empirical application from Section 4.

    Data:

    • data1203.txt - 120 monthly observations on the excess returns on 20 random stocks from S&P500, S&P500 index return, DJIA return from DataStream and excess return of the CRSP index from French & Fama database for a period 01/06/2005-01/05/2015.

    • data1204.txt - 120 monthly observations on the excess returns on 30 stocks from DJIA, S&P500 index return, DJIA return from DataStream and excess return of the CRSP index from French & Fama database for a period 01/06/2005-01/05/2015.

    • DJSTOCKS_60_FF_Z.dat - 60 monthly observations on the excess returns on 30 stocks from DJIA from DataStream and excess return of the CRSP index from French & Fama database for a period 01/06/2010-01/05/2015.

    • DJSTOCKS_60_SP_Z.dat - 60 monthly observations on the excess returns on 30 stocks from DJIA and S&P500 index return from DataStream for a period 01/06/2010-01/05/2015.

    • DJSTOCKS_60_DJ_Z.dat - 60 monthly observations on the excess returns on 30 stocks from DJIA and DJIA return from DataStream for a period 01/06/2010-01/05/2015.

    • STOCKS_60_FF_Z.dat - 60 monthly observations on the excess returns on 20 random stocks from S&P500 from DataStream and excess return of the CRSP index from French & Fama database for a period 01/06/2010-01/05/2015.

    • STOCKS_60_SP_Z.dat - 60 monthly observations on the excess returns on 20 random stocks from S&P500 and S&P500 index return from DataStream for a period 01/06/2010-01/05/2015.

    • STOCKS_60_DJ_Z.dat - 60 monthly observations on the excess returns on 20 random stocks from S&P500 and DJIA return from DataStream for a period 01/06/2010-01/05/2015.

      Description of the variables in the data sets:

    • Z_1, Z_2,...,Z_20,..., Z_30 - returns of individual stocks depending on the data set.

    • For calculation of the returns, adjusted prices from DataStream were used (see data from the Monte Carlo simulation part). The risk-free return is taken from the French & Fama database.

    • Time period was shortened from 120 to 60 observations: 01/06/2010-01/05/2015

    • Excess returns of the market and indices:

      • Z_SP - 60 observations on excess return of the S&P500 from DataStream
      • Z_DJ - 60 observations on excess return of the DJIA from DataStream
      • Z_FF - 60 observations on excess return of the market from French & Fama database

    Codes:

    • load_stocks120.gss - loads the data on the returns of the randomly selected 20 stocks of S&P500 and selects the last 60 observations

    • load_djstocks120.gss - loads the data on the returns of the 30 stocks of the Dow-Jones Industrial Average Index and selects the last 60 observations

    • CAPM.prc - contains functions to estimate the CAPM model by SUR and Minimum Distance methods

    • CAPM.inc - sets the format for the output files from the GAUSS procedures

    • CAPM_STOCKS20_FF.gss, CAPM_STOCKS20_DJ.gss, CAPM_STOCKS20_SP.gss, CAPM_DJSTOCKS30_FF.gss, CAPM_DJSTOCKS30_DJ.gss, CAPM_DJSTOCKS30_SP.gss - GAUSS procedures to estimate the CAPM models based on a particular data set (20 random stocks or 30 stocks from DJIA, as well as different market indexes: S&P500, DJIA, CRSP) and generate separate output files.
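
    The regression underlying these procedures is the CAPM in excess returns, z_it = alpha_i + beta_i * z_mt + e_it. A minimal single-equation OLS illustration (the replication code itself estimates the full SUR system and minimum-distance variants in GAUSS/MATLAB):

```python
import numpy as np

def capm_ols(z_stock, z_market):
    """OLS estimates of (alpha_i, beta_i) in z_it = alpha_i + beta_i * z_mt + e_it,
    where both inputs are excess-return series of equal length."""
    z_stock = np.asarray(z_stock, float)
    # Design matrix: intercept column plus the market excess return.
    X = np.column_stack([np.ones_like(z_stock), np.asarray(z_market, float)])
    alpha, beta = np.linalg.lstsq(X, z_stock, rcond=None)[0]
    return alpha, beta
```

    Running this per stock against Z_SP, Z_DJ, or Z_FF would show how the beta estimate shifts with the choice of market proxy, which is the measurement-error question the paper studies.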
  17. Investigate Core Weighting

    • hub-lincolninstitute.hub.arcgis.com
    • legacy-cities-lincolninstitute.hub.arcgis.com
    • +1more
    Updated Jun 10, 2016
    ArcGIS Maps for the Nation (2016). Investigate Core Weighting [Dataset]. https://hub-lincolninstitute.hub.arcgis.com/datasets/nation::investigate-core-weighting
    Explore at:
    Dataset updated
    Jun 10, 2016
    Dataset authored and provided by
    ArcGIS Maps for the Nation
    Area covered
    Description

    This application allows scoring of locally filtered Green Infrastructure intact habitat cores which have been shared to ArcGIS Online. The habitat cores shown were derived using a model built by the Green Infrastructure Center Inc. and adapted by Esri.

    Core* Attributes (6/18/2016)

    * What is a core and how is it made? Cores are intact habitat areas at least 100 acres in size and at least 200 meters wide. They are derived from the 2011 National Land Cover Database. Potential core areas are selected from land cover categories not containing the word “developed” or those categories associated with agricultural uses (crop, hay and pasture lands). The resulting areas are tested for size and width and then converted into unique polygons. These polygons are then overlaid with a diverse assortment of physiographic, biologic and hydrographic layers used in computing a “core quality index”. The resulting attributes of these polygons are described below.
    
    
    Acres (Acres) – core area in acres.

    BiodiversityPriorityIndex (Biodiversity Priority Index) – the intact core areas were overlaid with the Priority Index Layer (10 km resolution) surface described in the work by Clinton Jenkins et al., “US protected lands mismatch biodiversity priorities”, 4/2015 PNAS (112)16, www.pnas.org/cgi/doi/10.1073/pnas.1418034112. The Priority Index score is a summary, for each of 1200 endemic species, of the proportion of the species' range that is unprotected divided by the area of the species' range. Values are summed across all endemic species within a taxonomic group and across all taxonomic groups. Cores falling within a priority index category are assigned that priority index value. Note that the nominal resolution of the Priority Index data is 10 km. Cores may or may not have endemic species or collections of endemic species within them.

    Class (Core Size Class) – the size class for each core (area – water): if < 100 acres = fragment, if < 1000 = small, if < 10000 = medium, if > 10000 = large.

    Compactness (Compactness) – the ratio between the area of the core and the area of a circle with the same perimeter as the core.

    ELU_Bio_De (ELU Bioclimate Description) – the name of the primary Ecological Land Unit bioclimate type within each core. An Ecological Land Unit is an area of distinct bioclimate, landform, lithology and land cover. The data are available from the USGS at http://rmgsc.cr.usgs.gov/outgoing/ecosystems/Global.

    ELU_GLC_De (ELU Global Landcover Description) – the name of the primary Ecological Land Unit land cover type within each core. An Ecological Land Unit is an area of distinct bioclimate, landform, lithology and land cover. The data are available from the USGS at http://rmgsc.cr.usgs.gov/outgoing/ecosystems/Global.

    ELU_ID_Maj (ELU Majority) – the primary Ecological Land Unit appearing within a core. An Ecological Land Unit is an area of distinct bioclimate, landform, lithology and land cover. The data are available from the USGS at http://rmgsc.cr.usgs.gov/outgoing/ecosystems/Global.

    ELU_Lit_De (ELU Lithology Description) – the name of the primary Ecological Land Unit lithology type within each core. An Ecological Land Unit is an area of distinct bioclimate, landform, lithology and land cover. The data are available from the USGS at http://rmgsc.cr.usgs.gov/outgoing/ecosystems/Global.

    ELU_LSF_De (ELU Landform Description) – the name of the primary Ecological Land Unit landform type within each core. An Ecological Land Unit is an area of distinct bioclimate, landform, lithology and land cover. The data are available from the USGS at http://rmgsc.cr.usgs.gov/outgoing/ecosystems/Global.

    ELU_SWI (ELU Shannon Weaver Diversity Index) – the Shannon-Weaver diversity index of the Ecological Land Units appearing within a core. An Ecological Land Unit is an area of distinct bioclimate, landform, lithology and land cover. The data are available from the USGS at http://rmgsc.cr.usgs.gov/outgoing/ecosystems/Global. Greater diversity is frequently associated with better habitat potential.

    ERL_Descriptor (ERL Description) – the name of the primary Theobald Ecologically Relevant Landform within a core. From Theobald DM, Harrison-Atlas D, Monahan WB, Albano CM (2015) Ecologically-Relevant Landforms and Physiographic Diversity for Climate Adaptation Planning. PLoS One 10(12):e0143619. doi: 10.1371/journal.pone.0143619

    ERL_Maj (ERL Majority Type) – the dominant landform by area appearing within a core from Theobald's Ecologically Relevant Landforms. From Theobald DM, Harrison-Atlas D, Monahan WB, Albano CM (2015) Ecologically-Relevant Landforms and Physiographic Diversity for Climate Adaptation Planning. PLoS One 10(12):e0143619. doi: 10.1371/journal.pone.0143619

    ERL_SWI (ERL Shannon Weaver Diversity Index) – the Shannon-Weaver diversity index of the Theobald Ecologically Relevant Landforms appearing within a core. From Theobald DM, Harrison-Atlas D, Monahan WB, Albano CM (2015) Ecologically-Relevant Landforms and Physiographic Diversity for Climate Adaptation Planning. PLoS One 10(12):e0143619. doi: 10.1371/journal.pone.0143619. Greater diversity is frequently associated with better habitat potential.

    EcolSystem_Redundancy (Ecological System Redundancy) – measures the number of TNC Ecoregions in which a GAP Level 3 Ecological System occurs. The higher the number, the more Ecoregions an Ecological System appears in and the greater its redundancy. Cores are scored with the lowest redundancy value appearing within them. Low and very low redundancy values represent cores containing unique Ecological Systems. This analysis reproduces the work by Jocelyn Aycrigg et al., “Representations of Ecological Systems within the Protected Areas Network of the Continental United States”, 2013 PLoS One (8)1, applied rather to finer-resolution TNC Ecoregions units.

    EndemicSpeciesMax (Endemic Species Max) – the maximum count of endemic species (trees, freshwater fish, amphibians, reptiles, birds, mammals) per core when overlaid with an Endemic Species dataset (10 km resolution) from BiodiversityMapping.org.

    GAP_EcolSystem_L3_Maj (GAP Ecological System Level 3 Majority) – the primary GAP Level 3 Ecological System appearing within a core. The USGS GAP Level 3 code references the Ecological System classification element developed by NatureServe, which is focused mainly on habitat identification. Roughly 540 of the 590 ecological systems in the GAP database appear in these data. See http://gapanalysis.usgs.gov/gaplandcover/data/land-cover-metadata/#5 for more information.

    GAP_EcolSystems_L3_SWI (Ecological System Shannon Weaver Diversity Index) – the Shannon-Weaver diversity index of GAP Level 3 Ecological Systems within a core. The USGS GAP Level 3 code references the Ecological System classification element developed by NatureServe, which is focused mainly on habitat identification. Greater diversity is frequently associated with better habitat potential. Roughly 540 of the 590 ecological systems in the GAP database appear in these data. See http://gapanalysis.usgs.gov/gaplandcover/data/land-cover-metadata/#5 for more information.

    HM_Mean (Human Modified Index Mean Value) – the mean of the Theobald Human Modified values appearing in a core. A measure of the degree of human modification, the index ranges from 0.0 for a virgin landscape condition to 1.0 for the most heavily modified areas. The average value for the United States is 0.375. The data used to produce these values should be both more current and more detailed than the NLCD used for generating the cores. Emphasis was given to attempting to map, in particular, energy-related development. Theobald DM (2013) A general model to quantify ecological integrity for landscape assessment and US application. Landscape Ecol 28:1859-1874. doi: 10.1007/s10980-013-9941-6

    HM_Std (Human Modified Index Standard Deviation) – the standard deviation of the Theobald Human Modified values appearing within a core. A measure of the degree of human modification, the index ranges from 0.0 for a virgin landscape condition to 1.0 for the most heavily modified areas. The average value for the United States is 0.375. The data used to produce these values should be both more current and more detailed than the NLCD used for generating the cores. Emphasis was given to attempting to map, in particular, energy-related development. Theobald DM (2013) A general model to quantify ecological integrity for landscape assessment and US application. Landscape Ecol 28:1859-1874. doi: 10.1007/s10980-013-9941-6

    Landform_Maj (Landform Description (Esri)) – the primary local landform name within a core from the Karagulle/Frye method. These are “local” representations of Hammond's Landform Classification categories.

    NHDPlusFlowLenFtPerAcre (NHDPlus Flow Length (ft) per Core Acre) – the length, in feet, of NHDPlus features with FTYPE = StreamRiver and Q0001A >= 1.0 within a core, divided by the core area in acres. This is a measure of features with running water as modeled in the NHDPlusV2 database from the EPA and USGS - https://www.epa.gov/waterdata/nhdplus-national-hydrography-dataset-plus. This variable distinguishes hydrologic features with active flows from intermittent, artificial, pipeline or canal features.

    NHDPlusFlowLen_ft (NHDPlus Stream and River Length (ft), Flow Greater than 1 cfs) – the length of NHDPlus FTYPE (StreamRiver) and Q0001A >=
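
    Two of the attribute definitions above, Compactness and the Shannon-Weaver diversity index, translate directly into code. A sketch under those definitions (function names are ours):

```python
import math

def compactness(area: float, perimeter: float) -> float:
    """Ratio of core area to the area of a circle with the same perimeter,
    i.e. 4*pi*A / P**2: 1.0 for a perfect circle, smaller for elongated or
    convoluted shapes."""
    return 4 * math.pi * area / perimeter ** 2

def shannon_weaver(class_areas) -> float:
    """Shannon-Weaver diversity H = -sum(p_i * ln p_i), where p_i is the
    proportion of the core's area covered by land-unit class i."""
    total = sum(class_areas)
    return -sum((a / total) * math.log(a / total) for a in class_areas if a > 0)
```

    A core split evenly among four ELU classes scores H = ln 4, while a core covered by a single class scores 0, which is why the glossary repeatedly notes that greater diversity suggests better habitat potential.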
    
  18. The boxers' data of physical fitness

    • figshare.com
    xlsx
    Updated Mar 29, 2023
    Yuqiang Guo (2023). The boxers' data of physical fitness [Dataset]. http://doi.org/10.6084/m9.figshare.22306735.v4
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Mar 29, 2023
    Dataset provided by
    figshare
    Figsharehttp://figshare.com/
    Authors
    Yuqiang Guo
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These data are the raw data of some physical fitness test indicators of outstanding Chinese male boxers. After processing these data, we can get the model we want.

  19. Data from: DEEPEN 3D PFA Favorability Models and 2D Favorability Maps at...

    • catalog.data.gov
    • data.openei.org
    • +1more
    Updated Jan 20, 2025
    + more versions
    National Renewable Energy Laboratory (2025). DEEPEN 3D PFA Favorability Models and 2D Favorability Maps at Newberry Volcano [Dataset]. https://catalog.data.gov/dataset/deepen-3d-pfa-favorability-models-and-2d-favorability-maps-at-newberry-volcano-7185c
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Area covered
    Newberry Volcano
    Description

    DEEPEN stands for DE-risking Exploration of geothermal Plays in magmatic ENvironments. Part of the DEEPEN project involved developing and testing a methodology for a 3D play fairway analysis (PFA) for multiple play types (conventional hydrothermal, superhot EGS, and supercritical). This was tested using new and existing geoscientific exploration datasets at Newberry Volcano. This GDR submission includes images, data, and models related to the 3D favorability and uncertainty models and the 2D favorability and uncertainty maps.

    The DEEPEN PFA Methodology is based on the method proposed by Poux et al. (2020), which uses the Leapfrog Geothermal software with the Edge extension to conduct PFA in 3D. This method uses all available data to build a 3D geodata model, which is broken down into smaller blocks and analyzed with advanced geostatistical methods. Each dataset is imported into a 3D model in Leapfrog and divided into smaller blocks. Conditional queries are then used to assign each block an index value that ranks its favorability from 0 to 5, with 5 being most favorable, for each model (e.g., lithologic, seismic, magnetic, structural). The final step is to combine all the index models into a favorability index: each index model is multiplied by a given weight and the resulting values are summed.

    The DEEPEN PFA Methodology follows this approach, but split up by the specific geologic components of each play type, defined as follows:
    1. Conventional hydrothermal plays in magmatic environments: heat, fluid, and permeability
    2. Superhot EGS plays: heat, thermal insulation, and producibility (the ability to create and sustain fractures suitable for an EGS reservoir)
    3. Supercritical plays: heat, supercritical fluid, pressure seal, and producibility (the proper permeability and pressure conditions to allow production of supercritical fluid)

    More information on these components and their development can be found in Kolker et al. (2022). For the purposes of subsurface imaging, it is easier to detect a permeable fluid-filled reservoir than to detect separate fluid and permeability components. Therefore, in this analysis, we combine fluid and permeability for conventional hydrothermal plays, and supercritical fluid and producibility for supercritical plays. We also project the 3D favorability volumes onto 2D surfaces for simplified joint interpretation, and we incorporate an uncertainty component. Uncertainty was modeled using the best approach for each dataset, where we had enough information to do so. Identifying which subsurface parameters are the least resolved can help qualify current PFA results and focus future efforts in data collection. Where possible, the resulting uncertainty models/indices were weighted using the same weights applied to the respective datasets and summed, following the PFA methodology above, but for uncertainty.

    There are three versions of the Leapfrog model and associated favorability models:
    - v1.0: The first release, in June 2023.
    - v2.1: The second release, with improvements to the earthquake catalog (additional identified events included, duplicate events removed), the temperature model (a deep BHT fixed), and the index models (updated seismicity-heat source index models for supercritical and EGS, and resistivity-insulation index models for all three play types). Also uses the jet color map rather than the magma color map for improved interpretability.
    - v2.1.1: Updated to include v2.0 uncertainty results (see below for uncertainty model versions).

    There are two versions of the associated uncertainty models:
    - v1.0: The first release, in June 2023.
    - v2.0: The second release, with improvements to the temperature and fault uncertainty models.

    ** Note that this submission is deprecated and that a newer submission, linked below and titled "DEEPEN Final 3D PFA Favorability Models and 2D Favorability Maps at Newberry Volcano", contains the final versions of these resources. **
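    The weighted-sum step described above (block-wise index values from 0-5, multiplied by per-model weights and summed into a favorability index) can be sketched as follows. The weights, block-model shape, and random index values are hypothetical stand-ins for illustration; the actual DEEPEN weights are not given in this summary.

    ```python
    import numpy as np

    # Hypothetical index models for a superhot EGS play: each is a 3D block
    # model of integer favorability scores, 0 (least) to 5 (most favorable).
    rng = np.random.default_rng(0)
    shape = (4, 4, 4)  # (nx, ny, nz) blocks
    index_models = {
        "heat": rng.integers(0, 6, shape),
        "thermal_insulation": rng.integers(0, 6, shape),
        "producibility": rng.integers(0, 6, shape),
    }

    # Hypothetical weights, one per index model, summing to 1.
    weights = {"heat": 0.5, "thermal_insulation": 0.25, "producibility": 0.25}

    # Favorability index: multiply each index model by its weight, then sum
    # block by block. With weights summing to 1, the result stays in [0, 5].
    favorability = sum(w * index_models[name] for name, w in weights.items())
    ```

    Because the weights sum to one, the combined favorability index remains on the same 0-5 scale as the input index models, which keeps the result interpretable against the original ranking.
    
    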

  20. d

    Data from: DEEPEN: Final 3D PFA Favorability Models and 2D Favorability Maps...

    • catalog.data.gov
    • gdr.openei.org
    • +2more
    Updated Jan 20, 2025
    National Renewable Energy Laboratory (2025). DEEPEN: Final 3D PFA Favorability Models and 2D Favorability Maps at Newberry Volcano [Dataset]. https://catalog.data.gov/dataset/deepen-final-3d-pfa-favorability-models-and-2d-favorability-maps-at-newberry-volcano-2a96b
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Area covered
    Newberry Volcano
    Description

    Part of the DEEPEN (DE-risking Exploration of geothermal Plays in magmatic ENvironments) project involved developing and testing a methodology for a 3D play fairway analysis (PFA) for multiple play types (conventional hydrothermal, superhot EGS, and supercritical). This was tested using new and existing geoscientific exploration datasets at Newberry Volcano. This GDR submission includes images, data, and models related to the 3D favorability and uncertainty models and the 2D favorability and uncertainty maps.

    The DEEPEN PFA Methodology, detailed in the journal article below, is based on the method proposed by Poux & O'Brien (2020), which uses the Leapfrog Geothermal software with the Edge extension to conduct PFA in 3D. This method uses all available data to build a 3D geodata model, which is broken down into smaller blocks and analyzed with advanced geostatistical methods. Each dataset is imported into a 3D model in Leapfrog and divided into smaller blocks. Conditional queries are then used to assign each block an index value that ranks its favorability from 0 to 5, with 5 being most favorable, for each model (e.g., lithologic, seismic, magnetic, structural). The final step is to combine all the index models into a favorability index: each index model is multiplied by a given weight and the resulting values are summed.

    The DEEPEN PFA Methodology follows this approach, but split up by the specific geologic components of each play type, defined as follows:
    1. Conventional hydrothermal plays in magmatic environments: heat, fluid, and permeability
    2. Superhot EGS plays: heat, thermal insulation, and producibility (the ability to create and sustain fractures suitable for an EGS reservoir)
    3. Supercritical plays: heat, supercritical fluid, pressure seal, and producibility (the proper permeability and pressure conditions to allow production of supercritical fluid)

    More information on these components and their development can be found in Kolker et al. (2022). For the purposes of subsurface imaging, it is easier to detect a permeable fluid-filled reservoir than to detect separate fluid and permeability components. Therefore, in this analysis, we combine fluid and permeability for conventional hydrothermal plays, and supercritical fluid and producibility for supercritical plays. We also project the 3D favorability volumes onto 2D surfaces for simplified joint interpretation, and we incorporate an uncertainty component. Uncertainty was modeled using the best approach for each dataset, where we had enough information to do so. Identifying which subsurface parameters are the least resolved can help qualify current PFA results and focus future efforts in data collection. Where possible, the resulting uncertainty models/indices were weighted using the same weights applied to the respective datasets and summed, following the PFA methodology above, but for uncertainty.
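    The projection of the 3D favorability volumes onto 2D surfaces, and the parallel weighted combination of uncertainty models, can be sketched as below. The projection statistic (maximum favorability along the depth axis) and the uncertainty weights are assumptions for illustration; the summary does not state which statistic or weights DEEPEN used.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    favorability = rng.uniform(0, 5, size=(4, 4, 6))  # (nx, ny, nz) block model

    # Project the 3D favorability volume to a 2D map: here, take the maximum
    # favorability in each vertical column of blocks (an assumed choice;
    # a mean or depth-windowed statistic would work the same way).
    fav_map = favorability.max(axis=2)  # shape (nx, ny)

    # Uncertainty is combined like favorability: weight each dataset's
    # uncertainty model (here scaled 0-1) and sum, block by block.
    uncertainty_models = {
        "temperature": rng.uniform(0, 1, size=(4, 4, 6)),
        "resistivity": rng.uniform(0, 1, size=(4, 4, 6)),
    }
    weights = {"temperature": 0.6, "resistivity": 0.4}  # hypothetical weights
    uncertainty = sum(w * uncertainty_models[name] for name, w in weights.items())
    ```

    Reading the favorability map and the uncertainty model together shows which high-favorability areas rest on poorly resolved data, which is exactly what the text suggests using the uncertainty component for when planning future data collection.
    
    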

Eniko Gábor-Tóth; Philip Vermeulen; Eniko Gábor-Tóth; Philip Vermeulen (2021). Elementary index bias: evidence for the euro area from a large scanner dataset [Dataset]. http://doi.org/10.15456/ger.2018346.155305

Elementary index bias: evidence for the euro area from a large scanner dataset

Explore at:
13 scholarly articles cite this dataset
Available download formats: Stata do
Dataset updated
Mar 3, 2021
Dataset provided by
ZBW - Leibniz Informationszentrum Wirtschaft
Authors
Eniko Gábor-Tóth; Philip Vermeulen; Eniko Gábor-Tóth; Philip Vermeulen
License

Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description

We provide evidence on the effect of elementary index choice on inflation measurement in the euro area. Using scanner data for 15,844 individual items from 42 product categories and 10 euro area countries, we compute product-category-level elementary price indexes using eight different elementary index formulas. Measured inflation outcomes of the different index formulas are compared with the Fisher ideal index to quantify elementary index bias. We have three main findings. First, elementary index bias is quite variable across product categories, countries, and index formulas. Second, a comparison of elementary index formulas with and without expenditure weights shows that a shift from price-only indexes to expenditure-weighted indexes would entail differences of multiple percentage points in measured price changes at the product level. Finally, we show that elementary index bias is quantitatively more important than upper-level substitution bias.
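    The comparison the abstract describes can be illustrated with a few classic elementary index formulas. The eight formulas used in the study are not listed here, so the three unweighted formulas below (Carli, Dutot, Jevons) are illustrative examples alongside the Fisher ideal index, which uses expenditure quantities from both periods; the prices and quantities are made up.

    ```python
    import math

    def carli(p0, p1):
        # Carli: arithmetic mean of the price relatives p1/p0.
        return sum(b / a for a, b in zip(p0, p1)) / len(p0)

    def dutot(p0, p1):
        # Dutot: ratio of average prices.
        return sum(p1) / sum(p0)

    def jevons(p0, p1):
        # Jevons: geometric mean of the price relatives.
        return math.prod(b / a for a, b in zip(p0, p1)) ** (1 / len(p0))

    def fisher(p0, p1, q0, q1):
        # Fisher ideal: geometric mean of the Laspeyres and Paasche indexes.
        laspeyres = sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))
        paasche = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))
        return math.sqrt(laspeyres * paasche)

    # Hypothetical two-period prices and quantities for a tiny product category.
    p0, p1 = [1.0, 2.0], [1.1, 2.4]
    q0, q1 = [10, 5], [9, 4]

    # Elementary index bias, in the spirit of the study: the gap between each
    # elementary formula and the Fisher ideal index for the same category.
    target = fisher(p0, p1, q0, q1)
    bias = {f.__name__: f(p0, p1) - target for f in (carli, dutot, jevons)}
    ```

    Even on this toy category the three unweighted formulas disagree with each other and with the Fisher benchmark, which is the kind of formula-driven spread the study measures across its 42 product categories.
    
    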
