74 datasets found
  1. Fixed Income Data | Financial Models | 400+ Issuers | High Yield |...

    • datarade.ai
    .csv, .xls
    Updated Dec 6, 2024
    Cite
    Lucror Analytics (2024). Fixed Income Data | Financial Models | 400+ Issuers | High Yield | Fundamental Analysis | Analyst-adjusted | Europe, Asia, LatAm | Financial Modelling [Dataset]. https://datarade.ai/data-products/lucror-analytics-corporate-data-financial-models-400-b-lucror-analytics
    Explore at:
    .csv, .xls (available download formats)
    Dataset updated
    Dec 6, 2024
    Dataset authored and provided by
    Lucror Analytics
    Area covered
    China, Bonaire, Gibraltar, Croatia, Guatemala, Lebanon, State of, Sri Lanka, Dominican Republic, India
    Description

    Lucror Analytics: Fundamental Fixed Income Data and Financial Models for High-Yield Bond Issuers

    At Lucror Analytics, we deliver expertly curated data solutions focused on corporate credit and high-yield bond issuers across Europe, Asia, and Latin America. Our data offerings integrate comprehensive fundamental analysis, financial models, and analyst-adjusted insights tailored to support professionals in the credit and fixed-income sectors. Covering 400+ bond issuers, our datasets provide a high level of granularity, empowering asset managers, institutional investors, and financial analysts to make informed decisions with confidence.

    By combining proprietary financial models with expert analysis, we ensure our Fixed Income Data is actionable, precise, and relevant. Whether you're conducting credit risk assessments, building portfolios, or identifying investment opportunities, Lucror Analytics offers the tools you need to navigate the complexities of high-yield markets.

    What Makes Lucror’s Fixed Income Data Unique?

    Comprehensive Fundamental Analysis: Our datasets focus on issuer-level credit data for complex high-yield bond issuers. Through rigorous fundamental analysis, we provide deep insights into financial performance, credit quality, and key operational metrics. This approach equips users with the critical information needed to assess risk and uncover opportunities in volatile markets.

    Analyst-Adjusted Insights: Our data isn’t just raw numbers; it is refined through the expertise of seasoned credit analysts with an average of 14 years of fixed-income experience. Each dataset is carefully reviewed and adjusted to reflect real-world conditions, providing clients with actionable intelligence that goes beyond automated outputs.

    Focus on High-Yield Markets: Lucror’s specialization in high-yield markets across Europe, Asia, and Latin America allows us to offer a targeted and detailed dataset. This focus ensures that our clients gain unparalleled insights into some of the most dynamic and complex credit markets globally.

    How Is the Data Sourced? Lucror Analytics employs a robust and transparent methodology to source, refine, and deliver high-quality data:

    • Public Sources: Includes issuer filings, bond prospectuses, financial reports, and market data.
    • Proprietary Analysis: Leveraging proprietary models, our team enriches raw data to provide actionable insights.
    • Expert Review: Data is validated and adjusted by experienced analysts to ensure accuracy and relevance.
    • Regular Updates: Models are continuously updated to reflect market movements, regulatory changes, and issuer-specific developments.

    This rigorous process ensures that our data is both reliable and actionable, enabling clients to base their decisions on solid foundations.

    Primary Use Cases

    1. Fundamental Research: Institutional investors and analysts rely on our data to conduct deep-dive research into specific issuers and sectors. The combination of raw data, adjusted insights, and financial models provides a comprehensive foundation for decision-making.

    2. Credit Risk Assessment: Lucror’s financial models provide detailed credit risk evaluations, enabling investors to identify potential vulnerabilities and mitigate exposure. Analyst-adjusted insights offer a nuanced understanding of creditworthiness, making it easier to distinguish between similar issuers.

    3. Portfolio Management: Lucror’s datasets support the development of diversified, high-performing portfolios. By combining issuer-level data with robust financial models, asset managers can balance risk and return while staying aligned with investment mandates.

    4. Strategic Decision-Making: From assessing market trends to evaluating individual issuers, Lucror’s data empowers organizations to make informed, strategic decisions. The regional focus on Europe, Asia, and Latin America offers unique insights into high-growth and high-risk markets.

    Key Features of Lucror’s Data

    • 400+ High-Yield Bond Issuers: Coverage across Europe, Asia, and Latin America ensures relevance in key regions.
    • Proprietary Financial Models: Created by one of the best independent analyst teams on the street.
    • Analyst-Adjusted Data: Insights refined by experts to reflect off-balance sheet items and idiosyncrasies.
    • Customizable Delivery: Data is provided in formats and frequencies tailored to the needs of individual clients.

    Why Choose Lucror Analytics? Lucror Analytics is an independent provider free from conflicts of interest. We are committed to delivering high-quality financial models for credit and fixed-income professionals. Our approach combines proprietary models with expert insights, ensuring accuracy, relevance, and utility.

    By partnering with Lucror Analytics, you can:

    • Save costs and create internal efficiencies by outsourcing highly involved and time-consuming processes, including financial analysis and modelling.
    • Enhance your credit risk ...

  2. Frequently leveraged external data sources for global enterprises 2020

    • statista.com
    Updated Jul 1, 2025
    Cite
    Statista (2025). Frequently leveraged external data sources for global enterprises 2020 [Dataset]. https://www.statista.com/statistics/1235514/worldwide-popular-external-data-sources-companies/
    Explore at:
    Dataset updated
    Jul 1, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Aug 2020
    Area covered
    Worldwide
    Description

    In 2020, according to respondents surveyed, data masters typically leveraged a variety of external data sources to enhance their insights. The most popular external data sources for data masters were publicly available competitor data, open data, and proprietary datasets from data aggregators, at **, **, and ** percent, respectively.

  3. Mobile Location Data | Asia | +300M Unique Devices | +100M Daily Users |...

    • datarade.ai
    .json, .csv, .xls
    Updated Mar 21, 2025
    Cite
    Quadrant (2025). Mobile Location Data | Asia | +300M Unique Devices | +100M Daily Users | +200B Events / Month [Dataset]. https://datarade.ai/data-products/mobile-location-data-asia-300m-unique-devices-100m-da-quadrant
    Explore at:
    .json, .csv, .xls (available download formats)
    Dataset updated
    Mar 21, 2025
    Dataset authored and provided by
    Quadrant
    Area covered
    Asia, Iran (Islamic Republic of), Oman, Israel, Palestine, Armenia, Georgia, Korea (Democratic People's Republic of), Bahrain, Kyrgyzstan, Philippines
    Description

    Quadrant provides insightful, accurate, and reliable mobile location data.

    Our privacy-first mobile location data unveils hidden patterns and opportunities, provides actionable insights, and fuels data-driven decision-making at the world's biggest companies.

    These companies rely on our privacy-first Mobile Location and Points-of-Interest Data to unveil hidden patterns and opportunities, provide actionable insights, and fuel data-driven decision-making. They build better AI models, uncover business insights, and enable location-based services using our robust and reliable real-world data.

    We conduct stringent evaluations on data providers to ensure authenticity and quality. Our proprietary algorithms detect and cleanse corrupted and duplicated data points, allowing you to leverage our datasets rapidly with minimal processing or cleaning. During the ingestion process, our proprietary Data Filtering Algorithms remove events based on a number of qualitative factors, as well as latency and other integrity variables, to provide more efficient data delivery. The deduplicating algorithm focuses on a combination of four important attributes: Device ID, Latitude, Longitude, and Timestamp. This algorithm scours our data and identifies rows that contain the same combination of these four attributes. Post-identification, it retains a single copy and eliminates duplicate values to ensure our customers only receive complete and unique datasets.
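    The deduplication rule described here (keep one row per unique Device ID, Latitude, Longitude, and Timestamp combination) can be sketched in a few lines of pandas. This is an illustrative reconstruction with toy data, not Quadrant's actual pipeline.

```python
import pandas as pd

# Toy location events; the third row duplicates the first on all four
# key attributes (device_id, latitude, longitude, timestamp).
events = pd.DataFrame({
    "device_id": ["a1", "a1", "a1"],
    "latitude":  [1.30, 1.31, 1.30],
    "longitude": [103.8, 103.8, 103.8],
    "timestamp": [1700000000, 1700000060, 1700000000],
})

# Retain a single copy of each unique four-attribute combination.
deduped = events.drop_duplicates(
    subset=["device_id", "latitude", "longitude", "timestamp"], keep="first"
)
print(len(deduped))  # 2
```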

    We actively identify overlapping values at the provider level to determine the value each offers. Our data science team has developed a sophisticated overlap analysis model that helps us maintain a high-quality data feed by qualifying providers based on unique data values rather than volumes alone – measures that provide significant benefit to our end-use partners.

    Quadrant mobility data contains all standard attributes such as Device ID, Latitude, Longitude, Timestamp, Horizontal Accuracy, and IP Address, and non-standard attributes such as Geohash and H3. In addition, we have historical data available back through 2022.

    Through our in-house data science team, we offer sophisticated technical documentation, location data algorithms, and queries that help data buyers get a head start on their analyses. Our goal is to provide you with data that is “fit for purpose”.

  4. Aggregate out-of-sample r2 and NRMSE for sequential nowcasting, indexed by...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Chris Browne; David S. Matteson; Linden McBride; Leiqiu Hu; Yanyan Liu; Ying Sun; Jiaming Wen; Christopher B. Barrett (2023). Aggregate out-of-sample r2 and NRMSE for sequential nowcasting, indexed by methodology and prevalence. [Dataset]. http://doi.org/10.1371/journal.pone.0255519.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Chris Browne; David S. Matteson; Linden McBride; Leiqiu Hu; Yanyan Liu; Ying Sun; Jiaming Wen; Christopher B. Barrett
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Aggregate out-of-sample r2 and NRMSE for sequential nowcasting, indexed by methodology and prevalence.
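    A minimal sketch of how these two evaluation metrics are commonly computed. Note that the normalization convention for NRMSE varies (range, mean, or standard deviation of the held-out truth); the paper may use a different convention than the range-based one shown here.

```python
import numpy as np

def r2_out_of_sample(y_true, y_pred):
    # 1 - SSE/SST, with SST computed around the mean of the held-out truth.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - sse / sst

def nrmse(y_true, y_pred):
    # RMSE normalized by the range of the held-out truth (one common choice).
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

print(r2_out_of_sample([0, 1, 2, 3], [0, 1, 2, 4]))  # 0.8
print(nrmse([0, 1, 2, 3], [0, 1, 2, 4]))             # 0.1666...
```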

  5. Mobile Location Data | United Kingdom | +45M Unique Devices | +15M Daily...

    • datarade.ai
    .json, .csv, .xls
    Updated Mar 25, 2025
    + more versions
    Cite
    Quadrant (2025). Mobile Location Data | United Kingdom | +45M Unique Devices | +15M Daily Users | +15B Events / Month [Dataset]. https://datarade.ai/data-products/mobile-location-data-united-kingdom-45m-unique-devices-quadrant
    Explore at:
    .json, .csv, .xls (available download formats)
    Dataset updated
    Mar 25, 2025
    Dataset authored and provided by
    Quadrant
    Area covered
    United Kingdom
    Description

    Quadrant provides insightful, accurate, and reliable mobile location data.

    Our privacy-first mobile location data unveils hidden patterns and opportunities, provides actionable insights, and fuels data-driven decision-making at the world's biggest companies.

    These companies rely on our privacy-first Mobile Location and Points-of-Interest Data to unveil hidden patterns and opportunities, provide actionable insights, and fuel data-driven decision-making. They build better AI models, uncover business insights, and enable location-based services using our robust and reliable real-world data.

    We conduct stringent evaluations on data providers to ensure authenticity and quality. Our proprietary algorithms detect and cleanse corrupted and duplicated data points, allowing you to leverage our datasets rapidly with minimal processing or cleaning. During the ingestion process, our proprietary Data Filtering Algorithms remove events based on a number of qualitative factors, as well as latency and other integrity variables, to provide more efficient data delivery. The deduplicating algorithm focuses on a combination of four important attributes: Device ID, Latitude, Longitude, and Timestamp. This algorithm scours our data and identifies rows that contain the same combination of these four attributes. Post-identification, it retains a single copy and eliminates duplicate values to ensure our customers only receive complete and unique datasets.

    We actively identify overlapping values at the provider level to determine the value each offers. Our data science team has developed a sophisticated overlap analysis model that helps us maintain a high-quality data feed by qualifying providers based on unique data values rather than volumes alone – measures that provide significant benefit to our end-use partners.

    Quadrant mobility data contains all standard attributes such as Device ID, Latitude, Longitude, Timestamp, Horizontal Accuracy, and IP Address, and non-standard attributes such as Geohash and H3. In addition, we have historical data available back through 2022.

    Through our in-house data science team, we offer sophisticated technical documentation, location data algorithms, and queries that help data buyers get a head start on their analyses. Our goal is to provide you with data that is “fit for purpose”.

  6. Mean country level out-of-sample r2 and NRMSE for sequential nowcasting,...

    • plos.figshare.com
    xls
    Updated Jun 9, 2023
    Cite
    Chris Browne; David S. Matteson; Linden McBride; Leiqiu Hu; Yanyan Liu; Ying Sun; Jiaming Wen; Christopher B. Barrett (2023). Mean country level out-of-sample r2 and NRMSE for sequential nowcasting, indexed by methodology and prevalence. [Dataset]. http://doi.org/10.1371/journal.pone.0255519.t002
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Chris Browne; David S. Matteson; Linden McBride; Leiqiu Hu; Yanyan Liu; Ying Sun; Jiaming Wen; Christopher B. Barrett
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Mean country level out-of-sample r2 and NRMSE for sequential nowcasting, indexed by methodology and prevalence.

  7. Dataplex: United Healthcare Transparency in Coverage | 76,000+ US Employers...

    • datarade.ai
    .json
    Updated Jan 1, 2025
    + more versions
    Cite
    Dataplex (2025). Dataplex: United Healthcare Transparency in Coverage | 76,000+ US Employers | Insurance Data | Ideal for Healthcare Cost Analysis [Dataset]. https://datarade.ai/data-products/dataplex-united-healthcare-transparency-in-coverage-76-000-dataplex
    Explore at:
    .json (available download formats)
    Dataset updated
    Jan 1, 2025
    Dataset authored and provided by
    Dataplex
    Area covered
    United States of America
    Description

    United Healthcare Transparency in Coverage Dataset

    Unlock the power of healthcare pricing transparency with our comprehensive United Healthcare Transparency in Coverage dataset. This invaluable resource provides unparalleled insights into healthcare costs, enabling data-driven decision-making for insurers, employers, researchers, and policymakers.

    Key Features:

    • Extensive Coverage: Access detailed pricing information for a wide range of medical procedures and services across the United States, covering approximately 76,000 employers.
    • Granular Data: Analyze costs at the provider, plan, and employer levels, allowing for in-depth comparisons and trend analysis.
    • Massive Scale: Over 400TB of data generated monthly, providing a wealth of information for comprehensive analysis.
    • Historical Perspective: Track pricing changes over time to identify patterns and forecast future trends.
    • Regular Updates: Stay current with the latest pricing information, ensuring your analyses are always based on the most recent data.

    Detailed Data Points:

    For each of the 76,000 employers, the dataset includes:

    1. In-network negotiated rates for covered items and services
    2. Historical out-of-network allowed amounts and billed charges
    3. Cost-sharing information for specific items and services
    4. Pricing data for medical procedures and services across providers, plans, and employers
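    Transparency in Coverage files are delivered as JSON. As a rough illustration, the snippet below pulls negotiated rates out of one in-network record shaped loosely after the CMS Transparency in Coverage schema; the record is a simplified toy, and real files of this scale are normally parsed as streams rather than loaded whole.

```python
import json

# Toy record loosely following the CMS Transparency in Coverage
# in-network schema (heavily simplified for illustration).
record = json.loads("""
{
  "in_network": [{
    "billing_code": "99213",
    "negotiated_rates": [{
      "negotiated_prices": [{"negotiated_type": "negotiated",
                             "negotiated_rate": 84.5}]
    }]
  }]
}
""")

# Flatten the nested structure into a list of negotiated rates.
rates = [
    price["negotiated_rate"]
    for item in record["in_network"]
    for group in item["negotiated_rates"]
    for price in group["negotiated_prices"]
]
print(rates)  # [84.5]
```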

    Use Cases

    For Insurers:
    • Benchmark your rates against competitors
    • Optimize network design and provider contracting
    • Develop more competitive and cost-effective insurance products

    For Employers:
    • Make informed decisions about health plan offerings
    • Negotiate better rates with insurers and providers
    • Implement cost-saving strategies for employee healthcare

    For Researchers:
    • Conduct in-depth studies on healthcare pricing variations
    • Analyze the impact of policy changes on healthcare costs
    • Investigate regional differences in healthcare pricing

    For Policymakers:
    • Develop evidence-based healthcare policies
    • Monitor the effectiveness of price transparency initiatives
    • Identify areas for potential cost-saving interventions

    Data Delivery

    Our flexible data delivery options ensure you receive the information you need in the most convenient format:

    • Custom Extracts: We can provide targeted datasets focusing on specific regions, procedures, or time periods.
    • Regular Reports: Receive scheduled updates tailored to your specific requirements.

    Why Choose Our Dataset?

    1. Expertise: Our team has extensive experience in healthcare data retrieval and analysis, ensuring high-quality, reliable data.
    2. Customization: We can tailor the dataset to meet your specific needs, whether you're interested in particular companies, regions, or procedures.
    3. Scalability: Our infrastructure is designed to handle the massive scale of this dataset (400TB+ monthly), allowing us to provide comprehensive coverage without compromise.
    4. Support: Our dedicated team is available to assist with data interpretation and technical support.

    Harness the power of healthcare pricing transparency to drive your business forward. Contact us today to discuss how our United Healthcare Transparency in Coverage dataset can meet your specific needs and unlock valuable insights for your organization.

  8. Geo-credit score values in the U.S. 2020, by age

    • statista.com
    Updated Jul 10, 2025
    Cite
    Statista (2025). Geo-credit score values in the U.S. 2020, by age [Dataset]. https://www.statista.com/statistics/1048382/geo-credit-scores-of-americans-by-age/
    Explore at:
    Dataset updated
    Jul 10, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Sep 2020
    Area covered
    United States
    Description

    As of September 2020, approximately three percent of Americans aged 55 to 59 had a geo-credit score of at least ***. This age group has the highest share of every bucket except the lowest, suggesting that it simply has the most members in the sample. This proprietary data from Infutor shows the credit-worthiness of consumers. Infutor utilized ***** proprietary demographic, psychographic, attitudinal, econometric, and summarized credit attributes to build the GeoCredit Score database. GeoCredit scores range from A (highest traditional score value) to T (lowest traditional score value).

  9. Aggregate mean and standard deviation of out-of-sample r2 and NRMSE for...

    • plos.figshare.com
    xls
    Updated Jun 9, 2023
    Cite
    Chris Browne; David S. Matteson; Linden McBride; Leiqiu Hu; Yanyan Liu; Ying Sun; Jiaming Wen; Christopher B. Barrett (2023). Aggregate mean and standard deviation of out-of-sample r2 and NRMSE for contemporaneous prediction, indexed by methodology and indicator. [Dataset]. http://doi.org/10.1371/journal.pone.0255519.t003
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Chris Browne; David S. Matteson; Linden McBride; Leiqiu Hu; Yanyan Liu; Ying Sun; Jiaming Wen; Christopher B. Barrett
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Aggregate mean and standard deviation of out-of-sample r2 and NRMSE for contemporaneous prediction, indexed by methodology and indicator.

  10. Data from: Sample IEEE123 Bus system for OEDI SI

    • catalog.data.gov
    • data.openei.org
    • +1more
    Updated Jun 15, 2024
    + more versions
    Cite
    National Renewable Energy Laboratory (2024). Sample IEEE123 Bus system for OEDI SI [Dataset]. https://catalog.data.gov/dataset/sample-ieee123-bus-system-for-oedi-si
    Explore at:
    Dataset updated
    Jun 15, 2024
    Dataset provided by
    National Renewable Energy Laboratory
    Description

    Time series load and PV data from an IEEE123 bus system. An example electrical system, named the OEDI SI feeder, is used to test the workflow in a co-simulation. The system used is the IEEE123 test system, a well-studied test system (see link below to IEEE PES Test Feeder), modified to add solar power modules and measurements. The aim of this project is to create an easy-to-use platform where various types of analytics can be performed on a wide range of electrical grid datasets, and to establish an open-source library of algorithms that universities, national labs, and other developers can contribute to, usable on both open-source and proprietary grid data to improve the analysis of electrical distribution systems for the grid modeling community. OEDI Systems Integration (SI) is a grid algorithms and data analytics API created to standardize how data is sent between different modules that are run as part of a co-simulation. The readme file included in the S3 bucket provides information about the directory structure and how to use the algorithms. The sensors.json file is used to define the measurement locations.

  11. Hyperspectral data-cubes and reference pollutants of 302 urban wastewater...

    • opendata.eawag.ch
    Cite
    Hyperspectral data-cubes and reference pollutants of 302 urban wastewater samples - Package - ERIC [Dataset]. https://opendata.eawag.ch/dataset/hyperspectral-data-cubes-and-reference-pollutants-of-302-urban-wastewater-samples
    Explore at:
    Description

    Overview of the experiment

    We conducted this experiment to collect a dataset of hyperspectral data-cubes of wastewater samples, along with reference laboratory analyses of various wastewater pollutants. The goal was to train data-driven models to predict pollution levels in a sample using hyperspectral data-cubes. Therefore, for ten days, we collected samples from four wastewater treatment facilities around Melbourne, Australia. The samples come from three urban wastewater treatment facilities and one stormwater treatment facility. We conducted the sampling between 04/08/2024 and 15/08/2024. Once sampled, we analysed wastewater in the laboratory for reference physical and chemical pollutants and acquired hyperspectral images. To extend the dataset, we also created a combination of stormwater and wastewater samples for which we measured a hyperspectral data-cube and some reference pollutants. This repository also includes background information about data pre-processing and validation.

    Repository organization: how to use the data

    The repository is organized into numbered folders. Most folders contain a readme.md file in Markdown format, explaining their contents. All data are stored in non-proprietary formats: CSV for most files, except for hyperspectral acquisitions, which are in ENVI format (compatible with Python). Raw data are kept in their original format, sometimes lacking metadata such as units or column descriptions. This information is provided in the corresponding readme.md files. Pre-processed data, however, contain consistent column names, including units. Jupyter notebooks are included to pre-process and validate the data.

  12. Data on departure reasons in US CEO turnover over 1992-2019

    • data.mendeley.com
    Updated Jul 7, 2022
    + more versions
    Cite
    Dmitriy Chulkov (2022). Data on departure reasons in US CEO turnover over 1992-2019 [Dataset]. http://doi.org/10.17632/9mh4dg4rfn.3
    Explore at:
    Dataset updated
    Jul 7, 2022
    Authors
    Dmitriy Chulkov
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present a dataset created from merged secondary sources of ExecuComp and CompuStat and then augmented with manual data collection through searches of news stories related to CEO turnover.

    We start dataset construction with the ExecuComp executive-level data for the period from 1992 through 2020. These data are merged with the CompuStat dataset of financial variables. As the dataset is intended for research on CEO turnover, we exclude observations in which the CEO at the start of the fiscal year is not well-defined; these are cases when there were co-CEOs and cases when the CEO was shared across different firms. The data set also excludes firm/year combinations that involve a restructuring of the firm – spinoff, buyout, merger, or bankruptcy.

    We identify the CEO at the start of each year for each firm. This also helps identify the last year an individual served as CEO. In order to identify CEO turnover based on changes in the CEO from year to year, we require firm observations to extend over at least six contiguous years for the firm to remain in the sample. Cases involving the last year the firm is in the sample are excluded. We also exclude from the dataset cases when there was an interim CEO who stayed in the position for less than 2 years. This results in a sample of 3,100 firms reflecting 41,773 firm/year combinations.

    For this sample, we examine news articles related to CEO turnover to confirm the reasons for each CEO departure case. We use the ProQuest full-text news database and search for the company name, the executive name, and the departure year. We identify news articles mentioning the turnover case and then classify the explanation of each CEO departure case into one of five categories of turnover. These categories represent CEOs who resigned, were fired, retired, left due to illness or death, and those who left the position but stayed with the firm in a change of duties, respectively.

    The published data file does not include proprietary data from ExecuComp and CompuStat such as executive names and firm financial data. These data fields may be merged with the current data file using the provided ExecuComp and CompuStat identifiers.

    The dataset consists of a single table containing the following fields:

    • gvkey – unique identifier for the firms retrieved from the CompuStat database
    • firmid – unique firm identifier to distinguish distinct contiguous time periods created by breaks in a firm’s presence in the dataset
    • coname – company name as listed in the CompuStat database
    • execid – unique identifier for the executives retrieved from the ExecuComp database
    • year – fiscal year
    • reason – reason for the eventual departure of the CEO from the firm; this field is blank for executives who did not leave the firm during the sample period
    • ceo_departure – dummy variable that equals 1 if the executive left the firm in the fiscal year, and 0 otherwise
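    Because the published table omits the proprietary ExecuComp and CompuStat fields, users re-attach them via the shared identifiers. A minimal sketch of such a merge with pandas, using toy stand-in frames (the real merge would draw on the licensed databases, and the column name `assets` here is purely illustrative):

```python
import pandas as pd

# Toy stand-in for the published turnover table (fields per the list above).
turnover = pd.DataFrame({
    "gvkey": [1001, 1001, 2002],
    "execid": [55, 55, 77],
    "year": [2005, 2006, 2006],
    "reason": [None, "retired", None],
    "ceo_departure": [0, 1, 0],
})

# Toy stand-in for proprietary CompuStat financials keyed by gvkey and year.
compustat = pd.DataFrame({
    "gvkey": [1001, 1001, 2002],
    "year": [2005, 2006, 2006],
    "assets": [120.0, 130.0, 45.0],
})

# Re-attach the financials using the shared identifiers.
merged = turnover.merge(compustat, on=["gvkey", "year"], how="left")
print(merged.loc[merged["ceo_departure"] == 1, "assets"].item())  # 130.0
```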

  13. Data from: Lidar - LMCT - WTX WindTracer, Gordon Ridge - Raw Data

    • catalog.data.gov
    • data.openei.org
    • +1more
    Updated Apr 26, 2022
    + more versions
    Cite
    Wind Energy Technologies Office (WETO) (2022). Lidar - LMCT - WTX WindTracer, Gordon Ridge - Raw Data [Dataset]. https://catalog.data.gov/dataset/lidar-esrl-windcube-200s-wasco-airport-processed-data
    Explore at:
    Dataset updated
    Apr 26, 2022
    Dataset provided by
    Wind Energy Technologies Office (WETO)
    Description

    Overview

    Long-range scanning Doppler lidar located on Gordon Ridge. The WindTracer provides high-resolution, long-range lidar data for use in the WFIP2 program.

    Data Details

    The system is configured to take data in three different modes. All three modes take 15 minutes to complete and are started at 00, 15, 30, and 45 minutes after the hour. The first nine minutes of the period are spent performing two high-resolution, long-range Plan Position Indicator (PPI) scans at 0.0 and -1.0 degree elevation angles (tilts). These data have file names with HiResPPI noted in the "optional fields" of the file name; for example: lidar.z09.00.20150801.150000.HiResPPI.prd. The next six minutes are spent performing higher-altitude PPI scans and Range Height Indicator (RHI) scans. The PPI scans are completed at 6.0- and 30.0-degree elevations, and the RHI scans are completed from below the horizon (down into valleys, as able) up to 40 degrees elevation at 010-, 100-, 190-, and 280-degree azimuths. These files have PPI-RHI noted in the optional fields of the file name; for example: lidar.z09.00.20150801.150900.PPI-RHI.prd. The last minute is spent measuring a high-altitude vertical wind profile. Generally, this dataset will include data from near ground level up to the top of the planetary boundary layer (PBL), and higher-altitude data when high-level cirrus or other clouds are present. The Velocity Azimuth Display (VAD) is measured using six lines of sight at an elevation angle of 75 degrees at azimuth angles of 000, 060, 120, 180, 240, and 300 degrees from True North. These files have VAD noted in the optional fields of the file name; for example: lidar.z09.00.20150801.151400.VAD.prd.

    LMCT has a data format document that can be provided to users who need programming access to the data. This document is proprietary information but can be supplied to anyone after signing a non-disclosure agreement (NDA). To initiate the NDA process, please contact Keith Barr at keith.barr@lmco.com. The data are not proprietary, only the manual describing the data format.

    Data Quality

    Lockheed Martin Coherent Technologies (LMCT) has implemented and refined data quality analysis over the last 14 years, and this installation uses standard data-quality processing procedures. Generally, filtered data products can be accepted as fully data qualified. Secondary processing, such as wind vector analysis, should be used with some caution, as the data-quality filters are still young and incorrect values can be encountered.

    Uncertainty

    Uncertainty in the radial wind measurements (the system's base measurement) varies slightly with range. For most measurements, accuracy of the filtered radial wind measurements has been shown to be within 0.5 m/s, with accuracy better than 0.25 m/s not uncommon for ranges less than 10 km.

    Constraints

    Doppler lidar is dependent on aerosol loading in the atmosphere, and the signal can be significantly attenuated in precipitation and fog. These weather situations can reduce range performance significantly, and, in heavy rain or thick fog, range performance can be reduced to zero. Long-range performance depends on adequate aerosol loading to provide enough backscattered laser radiation so that a measurement can be made.
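    The file-naming convention described above can be parsed programmatically when sorting an archive by scan mode. This is a hypothetical helper, not part of the dataset's tooling; it assumes the lidar.z09.00 prefix is fixed, which may not hold across the whole archive.

```python
import re
from datetime import datetime

# Match names like lidar.z09.00.20150801.150000.HiResPPI.prd
# (date, start time, and scan mode in the "optional fields" slot).
PATTERN = re.compile(
    r"lidar\.z09\.00\.(\d{8})\.(\d{6})\.(HiResPPI|PPI-RHI|VAD)\.prd"
)

def parse_name(name):
    """Return (scan start time, scan mode) for a WindTracer file name."""
    m = PATTERN.fullmatch(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    date_s, time_s, mode = m.groups()
    start = datetime.strptime(date_s + time_s, "%Y%m%d%H%M%S")
    return start, mode

start, mode = parse_name("lidar.z09.00.20150801.151400.VAD.prd")
print(start.isoformat(), mode)  # 2015-08-01T15:14:00 VAD
```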

  14. Spending on cloud and data centers 2009-2024, by segment

    • statista.com
    • ai-chatbox.pro
    Updated Jun 23, 2025
    Cite
    Statista (2025). Spending on cloud and data centers 2009-2024, by segment [Dataset]. https://www.statista.com/statistics/1114926/enterprise-spending-cloud-and-data-centers/
    Explore at:
    Dataset updated
    Jun 23, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    In 2024, enterprise spending on cloud infrastructure services amounted to *** billion U.S. dollars, a growth of ** billion U.S. dollars compared to the previous year. The growing market for cloud infrastructure services is driven by organizations' demand for modern networking, storage, and database solutions.

    Increased spending on cloud services, mainly on platform as a service

    The platform as a service (PaaS) segment, which includes analytics, database, and internet of things (IoT) services, has the highest growth rate within the cloud infrastructure services market. The managed private cloud services share declined in comparison. Infrastructure as a service (IaaS) remained relatively steady, with companies like Amazon Web Services and Microsoft dominating the market. Software as a service (SaaS) is not included in this figure, though it too continues to experience growth in end-user spending worldwide.

    Data center spending declined in 2020

    Enterprise spending on data center hardware and software, on the other hand, began to decline slightly after several years of steady growth. Data center hardware and software encompasses spending on servers, networking, storage, and security software. Because data centers store proprietary or sensitive data, sites are secured by specific software; this includes splitting networks into security zones, for example. Other methods for ensuring security include using tools to scan applications and code before deployment to spot malware or vulnerabilities.

  15. Decision-Related Research on the Organization of Service Delivery Systems in...

    • icpsr.umich.edu
    ascii, sas, spss
    Updated Feb 16, 1992
    + more versions
    Cite
    O'Donoghue, Patrick (1992). Decision-Related Research on the Organization of Service Delivery Systems in Metropolitan Areas: Public Health [Dataset]. http://doi.org/10.3886/ICPSR07374.v1
    Explore at:
    spss, ascii, sas (available download formats)
    Dataset updated
    Feb 16, 1992
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    O'Donoghue, Patrick
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/7374/terms

    Time period covered
    1970 - 1975
    Area covered
    United States
    Description

    This study represents one of four research projects on service delivery systems in metropolitan areas, covering fire protection (DECISION-RELATED RESEARCH ON THE ORGANIZATION OF SERVICE DELIVERY SYSTEMS IN METROPOLITAN AREAS: FIRE PROTECTION [ICPSR 7409]), police protection (DECISION-RELATED RESEARCH ON THE ORGANIZATION OF SERVICE DELIVERY SYSTEMS IN METROPOLITAN AREAS: POLICE PROTECTION [ICPSR 7427]), solid waste management (DECISION-RELATED RESEARCH ON THE ORGANIZATION OF SERVICE DELIVERY SYSTEMS IN METROPOLITAN AREAS: SOLID WASTE MANAGEMENT [ICPSR 7487]), and public health (the present study). All four projects used a common unit of analysis, namely all 200 Standard Metropolitan Statistical Areas (SMSAs) that, according to the 1970 Census, had a population of less than 1,500,000 and were entirely located within a single state. In each project, a limited amount of information was collected for all 200 SMSAs. More extensive data were gathered within independently drawn samples of these SMSAs, for all local geographical units and each administrative jurisdiction or agency in the service delivery areas. Two standardized systems of geocoding -- the Federal Information Processing Standard (FIPS) codes and the Office of Revenue Sharing (ORS) codes -- were used, so that data from various sources could be combined. The use of these two coding schemes also allows users to combine data from two or more of the research projects conducted in conjunction with the present one, or to add data from a wide variety of public data files. The delivery of public health services was investigated in 200 SMSAs plus Minneapolis and St. Paul. The basic data collection effort involved the use of public data sources as well as proprietary data from the American Medical Association (AMA) and the Commission on Professional and Hospital Activities (CPHA). 
Because of the proprietary nature of some of the data and for the preservation of confidentiality, all analyses were performed at the SMSA level. Unlike the other three related research projects, the present study does not provide disaggregated units of analysis such as the administrative jurisdiction, the individual hospital, or other facilities. Variables describe the characteristics of available professionals and facilities, regulatory factors reflecting the impact of federal and state programs available in the area, and financing factors, including the coverage of state Medicaid programs, Blue Cross and Blue Shield, and Medicare programs. Information is also provided regarding the demographic and socioeconomic characteristics of the population served in each SMSA.
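    Because every geographic unit carries the same FIPS (and ORS) codes across the four projects, records from two of the studies can be joined on those codes. A toy Python sketch with hypothetical values; the FIPS codes and variable names here are illustrative, not drawn from the actual codebooks.

```python
# Hypothetical SMSA-level extracts from two of the related studies,
# keyed on a shared FIPS geocode.
health = {"26163": {"hospitals": 42}, "27053": {"hospitals": 31}}
fire = {"26163": {"fire_stations": 66}, "27053": {"fire_stations": 48}}

# Inner join: keep only geocodes present in both studies.
combined = {
    fips: {**health[fips], **fire[fips]}
    for fips in health.keys() & fire.keys()
}
```

    The same pattern extends to adding outside public data files, as the description notes, so long as they are keyed on FIPS or ORS codes.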

  16. ‘Store Transaction data’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Feb 14, 2022
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2022). ‘Store Transaction data’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-store-transaction-data-2e60/3a5df53c/?iid=007-635&v=presentation
    Explore at:
    Dataset updated
    Feb 14, 2022
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of ‘Store Transaction data’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/iamprateek/store-transaction-data on 14 February 2022.

    --- Dataset description provided by original source is as follows ---

    Context

    Nielsen receives transaction-level scanning data (POS data) from its partner stores on a regular basis. Stores sharing POS data include bigger-format store types such as supermarkets and hypermarkets, as well as smaller traditional-trade grocery stores (Kirana stores), medical stores, etc. that use a POS machine.

    While bigger-format stores scan all items for all transactions using a POS machine, smaller and more localized shops do not have a 100% compliance rate in scanning and entering information into the POS machine for all transactions.

    A transaction involving a single packet of chips or a single piece of candy may not be scanned and recorded, either to spare the customer the inconvenience or during rush hours when the store is crowded.

    Thus, the data received from such stores is often incomplete, lacking information on all transactions completed within a day.

    Additionally, apart from incomplete transaction data within a day, it is observed that certain stores do not share data for all active days. Stores share data for anywhere from 2 to 28 days in a month. While it is possible to impute/extrapolate data for 2 missing days of a month using 28 days of actual historical data, the reverse is not recommended.

    Nielsen encourages you to create a model which can help impute/extrapolate data to fill in the missing data gaps in the store level POS data currently received.
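    As a naive baseline for that imputation task, one could scale the mean of a store's observed daily totals to a full month. This is an illustrative sketch only; a competitive model would also lean on the ideal-data stores described below as reference covariates.

```python
def extrapolate_month_total(daily_values, days_in_month=30):
    """Scale the mean of the observed days up to a full month.

    daily_values: sales totals for the days a store actually reported.
    With 28 observed days this is a reasonable estimate; with only 2
    observed days it is unreliable, which is why the text warns
    against extrapolating in that direction.
    """
    if not daily_values:
        raise ValueError("no observed days to extrapolate from")
    return sum(daily_values) / len(daily_values) * days_in_month

# A store that reported 28 days, each averaging 100 units of value:
estimate = extrapolate_month_total([100] * 28, days_in_month=30)
```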

    Content

    You are provided with a dataset that contains store-level data by brands and categories for select stores:

    Hackathon_Ideal_Data - The file contains brand-level data for 10 stores for the last 3 months. This can be referred to as the ideal data.

    Hackathon_Working_Data - This contains data for selected stores which are missing and/or incomplete.

    Hackathon_Mapping_File - This file is provided to help understand the column names in the data set.

    Hackathon_Validation_Data - This file contains the stores and product groups for which you have to predict the Total_VALUE.

    Sample Submission - This file shows what the candidate needs to upload as output, in the required format. Sample data is provided in the file to help understand the columns and values required.

    Acknowledgements

    Nielsen Holdings plc (NYSE: NLSN) is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. Nielsen is divided into two business units. Nielsen Global Media, the arbiter of truth for media markets, provides media and advertising industries with unbiased and reliable metrics that create a shared understanding of the industry required for markets to function. Nielsen Global Connect provides consumer packaged goods manufacturers and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary Nielsen data with other data sources to help clients around the world understand what’s happening now, what’s happening next, and how to best act on this knowledge. An S&P 500 company, Nielsen has operations in over 100 countries, covering more than 90% of the world’s population.

    Know more: https://www.nielsen.com/us/en/

    Inspiration

    Build an imputation and/or extrapolation model to fill the missing data gaps for select stores by analyzing the data and determining which factors/variables/features best predict store sales.

    --- Original source retains full ownership of the source dataset ---

  17. Mobile Location Data | Brazil | +100M Unique Devices | +50M Daily Users | +50B Events / Month

    • datarade.ai
    .json, .csv, .xls
    Updated Mar 20, 2025
    Cite
    Quadrant (2025). Mobile Location Data | Brazil | +100M Unique Devices | +50M Daily Users | +50B Events / Month [Dataset]. https://datarade.ai/data-products/mobile-location-data-brazil-100m-unique-devices-50m-d-quadrant
    Explore at:
    .json, .csv, .xls (available download formats)
    Dataset updated
    Mar 20, 2025
    Dataset authored and provided by
    Quadrant
    Area covered
    Brazil
    Description

    Quadrant provides insightful, accurate, and reliable mobile location data.

    Our privacy-first mobile location data unveils hidden patterns and opportunities, provides actionable insights, and fuels data-driven decision-making at the world's biggest companies.

    These companies rely on our privacy-first Mobile Location and Points-of-Interest Data to build better AI models, uncover business insights, and enable location-based services using our robust and reliable real-world data.

    We conduct stringent evaluations of data providers to ensure authenticity and quality. Our proprietary algorithms detect and cleanse corrupted and duplicated data points, allowing you to leverage our datasets rapidly with minimal processing or cleaning. During the ingestion process, our proprietary Data Filtering Algorithms remove events based on a number of qualitative factors, as well as latency and other integrity variables, to provide more efficient data delivery. The deduplicating algorithm focuses on a combination of four important attributes: Device ID, Latitude, Longitude, and Timestamp. It scours our data and identifies rows that contain the same combination of these four attributes; post-identification, it retains a single copy and eliminates the duplicates, ensuring our customers receive only complete and unique datasets.
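    The deduplication rule described above, one retained row per (Device ID, Latitude, Longitude, Timestamp) combination, can be sketched in a few lines of Python; the field names here are illustrative, not Quadrant's actual schema.

```python
def deduplicate(events):
    """Keep the first copy of each (device_id, lat, lon, timestamp) key."""
    seen = set()
    unique = []
    for event in events:
        key = (event["device_id"], event["lat"], event["lon"], event["timestamp"])
        if key not in seen:
            seen.add(key)
            unique.append(event)
    return unique

events = [
    {"device_id": "a", "lat": -23.55, "lon": -46.63, "timestamp": 1700000000},
    {"device_id": "a", "lat": -23.55, "lon": -46.63, "timestamp": 1700000000},  # duplicate
    {"device_id": "b", "lat": -22.90, "lon": -43.17, "timestamp": 1700000005},
]
unique = deduplicate(events)  # the duplicate row is dropped
```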

    We actively identify overlapping values at the provider level to determine the value each offers. Our data science team has developed a sophisticated overlap analysis model that helps us maintain a high-quality data feed by qualifying providers based on unique data values rather than volumes alone – measures that provide significant benefit to our end-use partners.

    Quadrant mobility data contains all standard attributes such as Device ID, Latitude, Longitude, Timestamp, Horizontal Accuracy, and IP Address, and non-standard attributes such as Geohash and H3. In addition, we have historical data available back through 2022.

    Through our in-house data science team, we offer sophisticated technical documentation, location data algorithms, and queries that help data buyers get a head start on their analyses. Our goal is to provide you with data that is “fit for purpose”.

  18. International Cigarette Consumption Database v1.3

    • search.dataone.org
    • borealisdata.ca
    Updated Dec 28, 2023
    Cite
    Poirier, Mathieu JP; Guindon, G Emmanuel; Sritharan, Lathika; Hoffman, Steven J (2023). International Cigarette Consumption Database v1.3 [Dataset]. http://doi.org/10.5683/SP2/AOVUW7
    Explore at:
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Poirier, Mathieu JP; Guindon, G Emmanuel; Sritharan, Lathika; Hoffman, Steven J
    Time period covered
    Jan 1, 1970 - Jan 1, 2015
    Description

    This database contains tobacco consumption data from 1970-2015, collected through a systematic search coupled with consultation with country and subject-matter experts. Data quality appraisal was conducted in duplicate by at least two research team members, with greater weight given to official government sources. All data were standardized into units of cigarettes consumed, and a detailed accounting of data quality and sourcing was prepared. Data were found for 82 of the 214 countries for which searches for national cigarette consumption data were conducted, representing over 95% of global cigarette consumption and 85% of the world’s population. Cigarette consumption fell in most countries over the past three decades, but trends in country-specific consumption were highly variable. For example, China consumed 2.5 million metric tonnes (MMT) of cigarettes in 2013, more than Russia (0.36 MMT), the United States (0.28 MMT), Indonesia (0.28 MMT), Japan (0.20 MMT), and the next 35 highest-consuming countries combined. The US and Japan achieved reductions of more than 0.1 MMT from a decade earlier, whereas Russian consumption plateaued, and Chinese and Indonesian consumption increased by 0.75 MMT and 0.1 MMT, respectively. These data are generally concordant with modelled country-level data from the Institute for Health Metrics and Evaluation, and have the additional advantage of not smoothing the year-over-year discontinuities that are necessary for robust quasi-experimental impact evaluations. Before this study, publicly available data on cigarette consumption had been limited: either inappropriate for quasi-experimental impact evaluations (modelled data), held privately by companies (proprietary data), or widely dispersed across many national statistical agencies and research organisations (disaggregated data).
    This new dataset confirms that cigarette consumption has decreased in most countries over the past three decades, but that country-specific secular consumption trends are highly variable. The findings underscore the need for more robust processes in data reporting, ideally built into international legal instruments or other mandated processes. To monitor the impact of the WHO Framework Convention on Tobacco Control and other tobacco control interventions, data on national tobacco production, trade, and sales should be routinely collected and openly reported. The first use of this database for a quasi-experimental impact evaluation of the WHO Framework Convention on Tobacco Control is: Hoffman SJ, Poirier MJP, Katwyk SRV, Baral P, Sritharan L. Impact of the WHO Framework Convention on Tobacco Control on global cigarette consumption: quasi-experimental evaluations using interrupted time series analysis and in-sample forecast event modelling. BMJ. 2019 Jun 19;365:l2287. doi: https://doi.org/10.1136/bmj.l2287. Another use of this database was to systematically code and classify longitudinal cigarette consumption trajectories in European countries since 1970: Poirier MJ, Lin G, Watson LK, Hoffman SJ. Classifying European cigarette consumption trajectories from 1970 to 2015. Tobacco Control. 2022 Jan. doi: 10.1136/tobaccocontrol-2021-056627.
Statement of Contributions: Conceived the study: GEG, SJH Identified multi-country datasets: GEG, MP Extracted data from multi-country datasets: MP Quality assessment of data: MP, GEG Selection of data for final analysis: MP, GEG Data cleaning and management: MP, GL Internet searches: MP (English, French, Spanish, Portuguese), GEG (English, French), MYS (Chinese), SKA (Persian), SFK (Arabic); AG, EG, BL, MM, YM, NN, EN, HR, KV, CW, and JW (English), GL (English) Identification of key informants: GEG, GP Project Management: LS, JM, MP, SJH, GEG Contacts with Statistical Agencies: MP, GEG, MYS, SKA, SFK, GP, BL, MM, YM, NN, HR, KV, JW, GL Contacts with key informants: GEG, MP, GP, MYS, GP Funding: GEG, SJH SJH: Hoffman, SJ; JM: Mammone J; SRVK: Rogers Van Katwyk, S; LS: Sritharan, L; MT: Tran, M; SAK: Al-Khateeb, S; AG: Grjibovski, A.; EG: Gunn, E; SKA: Kamali-Anaraki, S; BL: Li, B; MM: Mahendren, M; YM: Mansoor, Y; NN: Natt, N; EN: Nwokoro, E; HR: Randhawa, H; MYS: Yunju Song, M; KV: Vercammen, K; CW: Wang, C; JW: Woo, J; MJPP: Poirier, MJP; GEG: Guindon, EG; GP: Paraje, G; GL Gigi Lin Key informants who provided data: Corne van Walbeek (South Africa, Jamaica) Frank Chaloupka (US) Ayda Yurekli (Turkey) Dardo Curti (Uruguay) Bungon Ritthiphakdee (Thailand) Jakub Lobaszewski (Poland) Guillermo Paraje (Chile, Argentina) Key informants who provided useful insights: Carlos Manuel Guerrero López (Mexico) Muhammad Jami Husain (Bangladesh) Nigar Nargis (Bangladesh) Rijo M John (India) Evan Blecher (Nigeria, Indonesia, Philippines, South Africa) Yagya Karki (Nepal) Anne CK Quah (Malaysia) Nery Suarez Lugo (Cuba) Agencies providing assistance: Irani... Visit https://dataone.org/datasets/sha256%3Aaa1b4aae69c3399c96bfbf946da54abd8f7642332d12ccd150c42ad400e9699b for complete metadata about this dataset.

  19. Data For: More Readers in More Places

    • zenodo.org
    • data.niaid.nih.gov
    bin, csv, pdf, zip
    Updated Jul 19, 2024
    Cite
    Alkim Ozaygen; Alkim Ozaygen; Cameron Neylon; Cameron Neylon (2024). Data For: More Readers in More Places [Dataset]. http://doi.org/10.5281/zenodo.4018842
    Explore at:
    csv, pdf, zip, bin (available download formats)
    Dataset updated
    Jul 19, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alkim Ozaygen; Alkim Ozaygen; Cameron Neylon; Cameron Neylon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This file describes data files provided as part of the preprint "More readers in more places: The benefits of open access for scholarly books" and as supporting information for the report "Diversifying readership through open access: A usage analysis for OA books".

    We provide the list of titles used in the study, the citation data, and webometrics in the processed form used in the article. The main data for the paper is usage and other collected data. This data is proprietary to Springer Nature. A short example of the data format is provided. For each table we provide a checksum of the data table (calculated as described below) held as a cloud SQL database table for processing.
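    A table checksum of this kind can be computed by streaming the file through a cryptographic hash. The sketch below assumes SHA-256, since this excerpt does not state which algorithm the authors used.

```python
import hashlib
import os
import tempfile

def table_checksum(path, chunk_size=1 << 20):
    """SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Self-check against an in-memory hash of the same bytes.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"title,citations\nExample,3\n")
checksum = table_checksum(tmp.name)
os.remove(tmp.name)
```

    Recomputing the digest of a downloaded table and comparing it to the published value verifies that the cloud SQL copy and the local copy match byte for byte.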

  20. Existing Multifamily Housing Sites

    • data.detroitmi.gov
    • detroitdata.org
    • +3 more
    Updated Sep 15, 2023
    Cite
    City of Detroit (2023). Existing Multifamily Housing Sites [Dataset]. https://data.detroitmi.gov/datasets/10258475651647b78825a5e5765a6c1f
    Explore at:
    Dataset updated
    Sep 15, 2023
    Dataset authored and provided by
    City of Detroit
    Area covered
    Description

    This dataset contains existing multifamily rental sites in the City of Detroit with housing units that have been preserved as affordable since 2018 with assistance from the public sector. Over time, affordable units are at risk of falling off line, either due to obsolescence or conversion to market-rate rents.

    The dataset includes occupied multifamily rental housing sites (typically 5+ units) in the City of Detroit, including those with units preserved as affordable since 2015 through public funding, regulatory agreements, and other means of assistance from the public sector. Data are collected from developers, other governmental departments and agencies, and proprietary data sources by various teams within the Housing and Revitalization Department, led by the Preservation Team. Data have been tracked since 2018 in service of citywide housing preservation goals. This reflects HRD's current knowledge of multifamily units in the city and will be updated as the department's knowledge changes. For more information about the City's multifamily affordable housing policies and goals, visit here.

    Affordability level for affordable units is measured by the percentage of the Area Median Income (AMI) that a household could earn for that unit to be considered affordable for them. For example, a unit that rents at a 60% AMI threshold would be affordable to a household earning 60% or less of the median income for the area. Rent affordability is typically defined as housing costs consuming 30% or less of monthly income. Regulated housing programs are designed to serve households based on certain income benchmarks relative to AMI, and these income benchmarks vary based on household size. Detroit's AMI levels are set by the Department of Housing and Urban Development (HUD) for the Detroit-Warren-Livonia, MI Metro Fair Market Rent (FMR) area. For more information on AMI in Detroit, visit here.
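    The affordability arithmetic in that description can be made concrete. Below is a small illustration using a hypothetical $60,000 area median income; actual HUD income limits also vary by household size, as the description notes.

```python
def max_affordable_rent(area_median_income, ami_fraction, burden=0.30):
    """Monthly rent affordable at a given AMI threshold.

    burden: the conventional 30%-of-income affordability standard.
    """
    annual_income_at_threshold = area_median_income * ami_fraction
    return annual_income_at_threshold * burden / 12

# A 60% AMI unit against a hypothetical $60,000 AMI:
rent = max_affordable_rent(60_000, 0.60)  # about $900/month
```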

Cite
Lucror Analytics (2024). Fixed Income Data | Financial Models | 400+ Issuers | High Yield | Fundamental Analysis | Analyst-adjusted | Europe, Asia, LatAm | Financial Modelling [Dataset]. https://datarade.ai/data-products/lucror-analytics-corporate-data-financial-models-400-b-lucror-analytics

Fixed Income Data | Financial Models | 400+ Issuers | High Yield | Fundamental Analysis | Analyst-adjusted | Europe, Asia, LatAm | Financial Modelling

Explore at:
.csv, .xls (available download formats)
Dataset updated
Dec 6, 2024
Dataset authored and provided by
Lucror Analytics
Area covered
China, Bonaire, Gibraltar, Croatia, Guatemala, Lebanon, State of, Sri Lanka, Dominican Republic, India
Description

Lucror Analytics: Fundamental Fixed Income Data and Financial Models for High-Yield Bond Issuers

At Lucror Analytics, we deliver expertly curated data solutions focused on corporate credit and high-yield bond issuers across Europe, Asia, and Latin America. Our data offerings integrate comprehensive fundamental analysis, financial models, and analyst-adjusted insights tailored to support professionals in the credit and fixed-income sectors. Covering 400+ bond issuers, our datasets provide a high level of granularity, empowering asset managers, institutional investors, and financial analysts to make informed decisions with confidence.

By combining proprietary financial models with expert analysis, we ensure our Fixed Income Data is actionable, precise, and relevant. Whether you're conducting credit risk assessments, building portfolios, or identifying investment opportunities, Lucror Analytics offers the tools you need to navigate the complexities of high-yield markets.

What Makes Lucror’s Fixed Income Data Unique?

Comprehensive Fundamental Analysis Our datasets focus on issuer-level credit data for complex high-yield bond issuers. Through rigorous fundamental analysis, we provide deep insights into financial performance, credit quality, and key operational metrics. This approach equips users with the critical information needed to assess risk and uncover opportunities in volatile markets.

Analyst-Adjusted Insights Our data isn’t just raw numbers: it’s refined through the expertise of seasoned credit analysts with an average of 14 years of fixed-income experience. Each dataset is carefully reviewed and adjusted to reflect real-world conditions, providing clients with actionable intelligence that goes beyond automated outputs.

Focus on High-Yield Markets Lucror’s specialization in high-yield markets across Europe, Asia, and Latin America allows us to offer a targeted and detailed dataset. This focus ensures that our clients gain unparalleled insights into some of the most dynamic and complex credit markets globally.

How Is the Data Sourced? Lucror Analytics employs a robust and transparent methodology to source, refine, and deliver high-quality data:

  • Public Sources: Includes issuer filings, bond prospectuses, financial reports, and market data.
  • Proprietary Analysis: Leveraging proprietary models, our team enriches raw data to provide actionable insights.
  • Expert Review: Data is validated and adjusted by experienced analysts to ensure accuracy and relevance.
  • Regular Updates: Models are continuously updated to reflect market movements, regulatory changes, and issuer-specific developments.

This rigorous process ensures that our data is both reliable and actionable, enabling clients to base their decisions on solid foundations.

Primary Use Cases

  1. Fundamental Research: Institutional investors and analysts rely on our data to conduct deep-dive research into specific issuers and sectors. The combination of raw data, adjusted insights, and financial models provides a comprehensive foundation for decision-making.

  2. Credit Risk Assessment: Lucror’s financial models provide detailed credit risk evaluations, enabling investors to identify potential vulnerabilities and mitigate exposure. Analyst-adjusted insights offer a nuanced understanding of creditworthiness, making it easier to distinguish between similar issuers.

  3. Portfolio Management: Lucror’s datasets support the development of diversified, high-performing portfolios. By combining issuer-level data with robust financial models, asset managers can balance risk and return while staying aligned with investment mandates.

  4. Strategic Decision-Making: From assessing market trends to evaluating individual issuers, Lucror’s data empowers organizations to make informed, strategic decisions. The regional focus on Europe, Asia, and Latin America offers unique insights into high-growth and high-risk markets.

Key Features of Lucror’s Data

  • 400+ High-Yield Bond Issuers: Coverage across Europe, Asia, and Latin America ensures relevance in key regions.
  • Proprietary Financial Models: Created by one of the best independent analyst teams on the street.
  • Analyst-Adjusted Data: Insights refined by experts to reflect off-balance-sheet items and idiosyncrasies.
  • Customizable Delivery: Data is provided in formats and frequencies tailored to the needs of individual clients.

Why Choose Lucror Analytics? Lucror Analytics is an independent provider free from conflicts of interest. We are committed to delivering high-quality financial models for credit and fixed-income professionals. Our approach combines proprietary models with expert insights, ensuring accuracy, relevance, and utility.

By partnering with Lucror Analytics, you can:

  • Save costs and create internal efficiencies by outsourcing highly involved and time-consuming processes, including financial analysis and modelling.
  • Enhance your credit risk ...
