70 datasets found
  1. Replication Data for: Balance as a Pre-Estimation Test for Time Series...

    • dataone.org
    • dataverse.harvard.edu
    Updated Nov 13, 2023
    Cite
    Pickup, Mark; Kellstedt, Paul (2023). Replication Data for: Balance as a Pre-Estimation Test for Time Series Analysis [Dataset]. http://doi.org/10.7910/DVN/G0XXSE
    Explore at:
    Dataset updated
    Nov 13, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Pickup, Mark; Kellstedt, Paul
    Description

    It is understood that ensuring equation balance is a necessary condition for a valid model of time series data. Yet, the definition of balance provided so far has been incomplete, and there has not been a consistent understanding of exactly why balance is important or how it can be applied. The discussion to date has focused on the estimates produced by the GECM. In this paper, we go beyond the GECM and beyond model estimates. We treat equation balance as a theoretical matter, not merely an empirical one, and describe how to use the concept of balance to test theoretical propositions before longitudinal data have been gathered. We explain how equation balance can be used to check whether your theoretical or empirical model is either wrong or incomplete in a way that will prevent a meaningful interpretation of the model. We also raise the issue of “I(0) balance” and its importance. The replication dataset includes the Stata .do file and .dta file to replicate the analysis in section 4.1 of the Supplementary Information.

  2. Replication Data for 'Gender (im)balance in the Russian cinema: on the...

    • search.dataone.org
    Updated Sep 24, 2024
    Cite
    Leontyeva, Xenia (2024). Replication Data for 'Gender (im)balance in the Russian cinema: on the screen and behind the camera' [Dataset]. http://doi.org/10.7910/DVN/ISVTB4
    Explore at:
    Dataset updated
    Sep 24, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Leontyeva, Xenia
    Description

    There are two CSV datasets in this publication, initially used in Xenia Leontyeva's master's thesis in sociology at HSE University Saint Petersburg, titled "Popularity Factors of Domestic Films: Gender Characteristics and State Support Measures" (2022), and later for the article by Leontyeva, Xenia, Olessia Koltsova, and Deb Verhoeven, titled "Gender (Im)Balance in Russian Cinema: On the Screen and behind the Camera" (accepted in January 2024 in The Journal of Cultural Analytics). The first dataset (N=1285) includes all Russian films produced between 2008 and 2019 and theatrically released between December 1, 2008, and December 31, 2019. Distribution statistics cover the territory of the CIS, of which the Russian Federation is the biggest market. Budget information is available for 644 films. The second dataset contains the Bechdel-Wallace test (as modified by Leontyeva) markup for 243 films, 193 of which have budget information. There is also a supplement with a detailed description of all variables and R code producing the tables, plots, and models for the article. The database was collected by Xenia Leontyeva while working at Nevafilm Research (until 2018) and later. In terms of distribution data, it is based on sources such as the open base Russian Cinema Fund Analytics – RCFA (since 2015), the closed base comScore/Rentrak ("International Box Office Essential") serving major Hollywood studios (data from it has been used since 2008 to fill gaps in open databases), Bookers' Bulletin (since 2011), and Russian Film Business Today magazines (since 2004), as well as data collected directly by Nevafilm Research employees from film distributors and producers; the rights to use and continue this dataset were received from the Nevafilm company. In terms of production data, the information was taken from the State register of film distribution certificates, Kinopoisk.ru, and from the films' credits.

  3. Data for Lake Mendota Phosphorus Cycling Model

    • portal.edirepository.org
    csv
    Updated Feb 28, 2019
    Cite
    Paul Hanson; Aviah Stillman (2019). Data for Lake Mendota Phosphorus Cycling Model [Dataset]. http://doi.org/10.6073/pasta/36d0ee7bf67d9dabade404c92be73917
    Explore at:
    csv (24 files, 4.8 kB to 1.6 MB). Available download formats
    Dataset updated
    Feb 28, 2019
    Dataset provided by
    EDI
    Authors
    Paul Hanson; Aviah Stillman
    Time period covered
    May 9, 1995 - Dec 31, 2015
    Area covered
    Variables measured
    day, Date, EpiP, FLOW, SALT, TEMP, time, HypoP, PLoad, depth, and 45 more
    Description

    There is an opportunity to advance both prediction accuracy and scientific discovery for phosphorus cycling in Lake Mendota (Wisconsin, USA). Twenty years of phosphorus measurements show patterns at seasonal to decadal scales, suggesting a variety of drivers control lake phosphorus dynamics. Our objectives are to produce a phosphorus budget for Lake Mendota and to accurately predict summertime epilimnetic phosphorus using a simple and adaptable modeling approach. We combined ecological knowledge with machine learning in the emerging paradigm, theory-guided data science (TGDS). A mass balance model (PROCESS) accounted for most of the observed pattern in lake phosphorus. However, inclusion of machine learning (RNN) and an ecological principle (PGRNN) to constrain its output improved summertime phosphorus predictions and accounted for long term changes missed by the mass balance model. TGDS indicated additional processes related to water temperature, thermal stratification, and long term changes in external loads are needed to improve our mass balance modeling approach.

  4. LTFS Data Science FinHack 3(Analytics Vidhya)

    • kaggle.com
    Updated Feb 1, 2021
    Cite
    Parv619 (2021). LTFS Data Science FinHack 3(Analytics Vidhya) [Dataset]. https://www.kaggle.com/parv619/ltfs-data-science-finhack-3analytics-vidhya/metadata
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Feb 1, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Parv619
    Description

    This dataset contains extracted data from LTFS Data Science FinHack 3 (Analytics Vidhya)

    LTFS Top-up loan Up-sell prediction

    A loan is when you receive money from a financial institution in exchange for future repayment of the principal, plus interest. Financial institutions provide loans to industries, corporates, and individuals. The interest received on these loans is one of the main sources of income for financial institutions.

    A top-up loan, true to its name, is a facility for availing further funds on an existing loan. When you have a loan that has already been disbursed and is under repayment, and you need more funds, you can simply avail additional funding on the same loan, thereby minimizing the time, effort, and cost of applying again.

    LTFS provides loan services to its customers and is interested in selling more of its Top-up loan services to its existing customers, so it has decided to identify when to pitch a Top-up during the original loan tenure. If it correctly identifies the most suitable time to offer a top-up, this will ultimately lead to more disbursals and can also help it beat competing offerings from other institutions.

    To understand this behaviour, LTFS has provided data for its customers indicating whether a particular customer took the Top-up service and when they took it, represented by the target variable Top-up Month.

    You are provided with two types of information:

    1. Customer’s Demographics: The demography table along with the target variable & demographic information contains variables related to Frequency of the loan, Tenure of the loan, Disbursal Amount for a loan & LTV.

    2. Bureau data: Bureau data contains the behavioural and transactional attributes of the customers like current balance, Loan Amount, Overdue etc. for various tradelines of a given customer

    LTFS has tasked you, as a data scientist, with building a model that, given the Top-up loan bucket of 128,655 customers along with demographic and bureau data, predicts the right bucket/period for the 14,745 customers in the test data.

    Important Note

    Note that the feasibility of implementing top solutions in a real production scenario will be considered when adjudging winners and can change the final standings for prize eligibility.

    Data Dictionary

    Train_Data.zip: This zip file contains the train files for demography data and bureau data. The data dictionary is also included here.

    Test_Data.zip: This zip file contains information on demography data and bureau data for a different set of customers.

    Sample Submission: This file contains the exact submission format for the predictions. Please submit a CSV file only.

    Variable definitions:
    • ID: Unique identifier for a row
    • Top-up Month (Target): Bucket/period for the Top-up loan

    How to Make a Submission?

    All submissions are to be made at the solution checker tab. For a step-by-step view of how to make a submission, check the video below.

    Evaluation

    The evaluation metric for this competition is macro_f1_score across all entries in the test set.

    Public and Private Split: Test data is further divided into Public (40%) and Private (60%).

    Your initial responses will be checked and scored on the Public data. The final rankings will be based on your Private score, which will be published once the competition is over.
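    Since scoring uses macro F1, here is a minimal sketch of how a submission could be checked locally with scikit-learn. The file names and exact column labels below are assumptions for illustration, not part of the official competition materials.

```python
# Minimal sketch: computing macro F1 for the Top-up Month buckets.
# "ground_truth.csv" and "my_submission.csv" are placeholder file names;
# the columns follow the data dictionary above (ID, Top-up Month).
import pandas as pd
from sklearn.metrics import f1_score

truth = pd.read_csv("ground_truth.csv")    # columns: ID, Top-up Month
preds = pd.read_csv("my_submission.csv")   # columns: ID, Top-up Month

merged = truth.merge(preds, on="ID", suffixes=("_true", "_pred"))
score = f1_score(merged["Top-up Month_true"],
                 merged["Top-up Month_pred"],
                 average="macro")
print(f"macro F1: {score:.4f}")
```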

    Guidelines for Final Submission

    Please ensure that your final submission includes the following:

    • Solution file containing the predicted Top-up Month bucket for the test dataset (the format is given in the sample submission CSV).
    • Code file containing the following:
      • Code: Note that it is mandatory to submit your code for a valid final submission.
      • Approach: Please share your approach to solving the problem (doc/ppt/pdf format). It should cover: a brief on the approach you used to solve the problem; which data-preprocessing / feature engineering ideas really worked and how you discovered them; and what your final model looks like and how you reached it.

    How to Set Final Submission?

    Hackathon Rules

    The final standings will be based on the private leaderboard score and presentations made in an online interview round with LTFS & Analytics Vidhya, which will be held after the contest closes.
    • Setting the final submission is recommended. Without a final submission, the submission corresponding to the best public score will be taken as the final submission.
    • Use of external data is prohibited.
    • You can only make 10 submissions per day.
    • Entries submitted after the contest is closed will not be considered.
    • The code file pertaining to your final submission is mandatory while setting the final submission.
    • Throughout the hackathon, you are expected to respect fellow hackers and act with high integrity. Analytics Vidhya and LTFS hold the right to disqualify any participant at any stage of the compe...

  5. Replication Data for: Figure 1.4 Density Balance Index scores by city size...

    • search.dataone.org
    • borealisdata.ca
    Updated Dec 28, 2023
    Cite
    Taylor, Zack (2023). Replication Data for: Figure 1.4 Density Balance Index scores by city size group, 1970-2010 [Dataset]. http://doi.org/10.5683/SP2/W0BBB6
    Explore at:
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Taylor, Zack
    Description

    The script graphs box plots of DBI scores for all metro areas, grouping by year and metropolitan-area population size (larger or smaller than 250,000 people). Additional scripts create different graphs. Data are provided in both "long" and "tall" formats.
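    The original replication script is not reproduced here, but an illustrative sketch of the same kind of grouped box plot (in Python with pandas and seaborn, using hypothetical file and column names) could look like this:

```python
# Illustrative sketch only: box plots of DBI scores grouped by year and
# city-size group. "dbi_long.csv", "year", "dbi", and "population" are
# hypothetical names, not the replication dataset's actual ones.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

dbi = pd.read_csv("dbi_long.csv")
dbi["size_group"] = dbi["population"].gt(250_000).map({True: ">250k", False: "<=250k"})

sns.boxplot(data=dbi, x="year", y="dbi", hue="size_group")
plt.ylabel("Density Balance Index")
plt.tight_layout()
plt.show()
```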

  6. Research data associated to crossover-mutation interaction balance...

    • ieee-dataport.org
    Updated Aug 9, 2023
    Cite
    Ashenafi Mehahri (2023). Research data associated to crossover-mutation interaction balance experiment [Dataset]. https://ieee-dataport.org/documents/research-data-associated-crossover-mutation-interaction-balance-experiment
    Explore at:
    Dataset updated
    Aug 9, 2023
    Authors
    Ashenafi Mehahri
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The article intends to shed some light on how the crossover-mutation interaction balance in genetic algorithms (GAs) can be understood, given that the current literature falls short of providing a generalized rule to guide the determination of the balance between the two operators across problem domains.

  7. Replication Data for "Beyond the Balance Sheet Model of Banking:...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 8, 2023
    Cite
    Seru, Amit; Buchak, Greg; Matvos, Gregor; Piskorski, Tomasz (2023). Replication Data for "Beyond the Balance Sheet Model of Banking: Implications for Bank Regulation and Monetary Policy" [Dataset]. http://doi.org/10.7910/DVN/4NUQE3
    Explore at:
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Seru, Amit; Buchak, Greg; Matvos, Gregor; Piskorski, Tomasz
    Description

    This is the replication package for "Beyond the Balance Sheet Model of Banking: Implications for Bank Regulation and Monetary Policy," accepted in 2023 by the Journal of Political Economy.

  8. Replication Data for: Understanding Equation Balance in Time Series...

    • search.dataone.org
    Updated Nov 22, 2023
    Cite
    Enns, Peter; Wlezien, Christopher (2023). Replication Data for: Understanding Equation Balance in Time Series Regression [Dataset]. http://doi.org/10.7910/DVN/E3AVU6
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Enns, Peter; Wlezien, Christopher
    Description

    Replication data and simulation code. Visit https://dataone.org/datasets/sha256%3Ab5cb8e87ab2a186c7acfae999fa64e855665e9de74e2699afc1ed56daa10054c for complete metadata about this dataset.

  9. Economic Data

    • lseg.com
    Updated Nov 19, 2023
    Cite
    LSEG (2023). Economic Data [Dataset]. https://www.lseg.com/en/data-analytics/financial-data/economic-data
    Explore at:
    Dataset updated
    Nov 19, 2023
    Dataset provided by
    London Stock Exchange Group (http://www.londonstockexchangegroup.com/)
    Authors
    LSEG
    License

    https://www.lseg.com/en/policies/website-disclaimer

    Description

    View LSEG's extensive Economic Data, including content that allows the analysis and monitoring of national economies with historical and real-time series.

  10. Bank Transaction Dataset for Fraud Detection

    • kaggle.com
    Updated Nov 4, 2024
    Cite
    vala khorasani (2024). Bank Transaction Dataset for Fraud Detection [Dataset]. https://www.kaggle.com/datasets/valakhorasani/bank-transaction-dataset-for-fraud-detection
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Nov 4, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    vala khorasani
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset provides a detailed look into transactional behavior and financial activity patterns, ideal for exploring fraud detection and anomaly identification. It contains 2,512 samples of transaction data, covering various transaction attributes, customer demographics, and usage patterns. Each entry offers comprehensive insights into transaction behavior, enabling analysis for financial security and fraud detection applications.

    Key Features:

    • TransactionID: Unique alphanumeric identifier for each transaction.
    • AccountID: Unique identifier for each account, with multiple transactions per account.
    • TransactionAmount: Monetary value of each transaction, ranging from small everyday expenses to larger purchases.
    • TransactionDate: Timestamp of each transaction, capturing date and time.
    • TransactionType: Categorical field indicating 'Credit' or 'Debit' transactions.
    • Location: Geographic location of the transaction, represented by U.S. city names.
    • DeviceID: Alphanumeric identifier for devices used to perform the transaction.
    • IP Address: IPv4 address associated with the transaction, with occasional changes for some accounts.
    • MerchantID: Unique identifier for merchants, showing preferred and outlier merchants for each account.
    • AccountBalance: Balance in the account post-transaction, with logical correlations based on transaction type and amount.
    • PreviousTransactionDate: Timestamp of the last transaction for the account, aiding in calculating transaction frequency.
    • Channel: Channel through which the transaction was performed (e.g., Online, ATM, Branch).
    • CustomerAge: Age of the account holder, with logical groupings based on occupation.
    • CustomerOccupation: Occupation of the account holder (e.g., Doctor, Engineer, Student, Retired), reflecting income patterns.
    • TransactionDuration: Duration of the transaction in seconds, varying by transaction type.
    • LoginAttempts: Number of login attempts before the transaction, with higher values indicating potential anomalies.

    This dataset is ideal for data scientists, financial analysts, and researchers looking to analyze transactional patterns, detect fraud, and build predictive models for financial security applications. The dataset was designed for machine learning and pattern analysis tasks and is not intended as a primary data source for academic publications.
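    As a quick orientation, a minimal exploration sketch using the column names listed above might look like the following. The CSV file name and the specific anomaly thresholds are assumptions for illustration, not part of the dataset's documentation.

```python
# Minimal exploration sketch for the bank-transaction dataset described above.
# The file name "bank_transactions_data.csv" is a placeholder; column names
# follow the feature list in the description.
import pandas as pd

df = pd.read_csv("bank_transactions_data.csv",
                 parse_dates=["TransactionDate", "PreviousTransactionDate"])

# Time since the previous transaction on the same account, in hours.
df["HoursSincePrevious"] = (
    (df["TransactionDate"] - df["PreviousTransactionDate"]).dt.total_seconds() / 3600
)

# Simple rule-of-thumb anomaly flags in the spirit of the description:
# unusually many login attempts, or an unusually large debit.
debit_cutoff = df.loc[df["TransactionType"] == "Debit", "TransactionAmount"].quantile(0.99)
df["SuspiciousLogins"] = df["LoginAttempts"] > 3
df["LargeDebit"] = (df["TransactionType"] == "Debit") & (df["TransactionAmount"] > debit_cutoff)

print(df.groupby("Channel")[["SuspiciousLogins", "LargeDebit"]].mean())
```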

  11. Fena Valley Reservoir water-balance model, FVR_2016

    • data.usgs.gov
    • datasets.ai
    • +2more
    Updated Aug 27, 2020
    + more versions
    Cite
    Sarah Rosa; Lauren Hay (2020). Fena Valley Reservoir water-balance model, FVR_2016 [Dataset]. http://doi.org/10.5066/F7HH6HV4
    Explore at:
    Dataset updated
    Aug 27, 2020
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Sarah Rosa; Lauren Hay
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    2017
    Description

    The FVR_2016 folder contains the input files needed to run the Fena Valley Reservoir water-balance model and a README_FVR_2016.txt document that describes the contents of this archive and the execution of the water-balance model.

  12. Balance Sheet Management Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 18, 2025
    Cite
    Data Insights Market (2025). Balance Sheet Management Report [Dataset]. https://www.datainsightsmarket.com/reports/balance-sheet-management-502773
    Explore at:
    doc, pdf, ppt. Available download formats
    Dataset updated
    Jun 18, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global Balance Sheet Management (BSM) market is experiencing robust growth, driven by increasing regulatory scrutiny, the need for enhanced financial reporting accuracy, and the rising adoption of advanced technologies like AI and machine learning. The market, estimated at $5 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 12% from 2025 to 2033, reaching approximately $12 billion by 2033. This growth is fueled by the expanding adoption of cloud-based BSM solutions, which offer scalability, flexibility, and cost-effectiveness compared to on-premise systems. Furthermore, the increasing demand for real-time financial insights and improved risk management capabilities is further propelling market expansion. Key players like BCG, Oracle, and Moody's are leveraging their expertise to offer comprehensive BSM solutions encompassing data analytics, automation, and compliance features. The market is segmented by deployment (cloud, on-premise), by organization size (SMEs, large enterprises), and by industry vertical (banking, finance, insurance, etc.), each exhibiting unique growth trajectories. The growth of the BSM market is influenced by several trends including the increasing adoption of automation, the growing preference for cloud-based solutions, and the increasing focus on data analytics for improved decision-making. However, certain restraints such as high implementation costs, data security concerns, and the lack of skilled professionals can hinder market growth. Despite these challenges, the long-term prospects for the BSM market remain positive, driven by the ever-increasing need for efficient and accurate balance sheet management in a complex and dynamic regulatory environment. The integration of advanced technologies and the continuous evolution of compliance requirements will shape the future trajectory of the BSM market, creating new opportunities for vendors and service providers.
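    The headline figures are internally consistent: compounding the stated 2025 estimate at the stated CAGR reproduces roughly the 2033 projection, as the quick check below shows.

```python
# Quick arithmetic check of the report's headline figures (values as stated above).
base_2025 = 5.0      # estimated 2025 market size, USD billions
cagr = 0.12          # 12% compound annual growth rate
years = 2033 - 2025  # 8 years of compounding

projected_2033 = base_2025 * (1 + cagr) ** years
print(f"Projected 2033 market size: ${projected_2033:.1f}B")  # roughly $12B
```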

  13. Fixed Income Data | Financial Models | 400+ Issuers | High Yield |...

    • datarade.ai
    .csv, .xls
    Updated Dec 6, 2024
    Cite
    Lucror Analytics (2024). Fixed Income Data | Financial Models | 400+ Issuers | High Yield | Fundamental Analysis | Analyst-adjusted | Europe, Asia, LatAm | Financial Modelling [Dataset]. https://datarade.ai/data-products/lucror-analytics-corporate-data-financial-models-400-b-lucror-analytics
    Explore at:
    .csv, .xls. Available download formats
    Dataset updated
    Dec 6, 2024
    Dataset authored and provided by
    Lucror Analytics
    Area covered
    China, Croatia, Bonaire, State of, Sri Lanka, India, Dominican Republic, Guatemala, Gibraltar, Lebanon
    Description

    Lucror Analytics: Fundamental Fixed Income Data and Financial Models for High-Yield Bond Issuers

    At Lucror Analytics, we deliver expertly curated data solutions focused on corporate credit and high-yield bond issuers across Europe, Asia, and Latin America. Our data offerings integrate comprehensive fundamental analysis, financial models, and analyst-adjusted insights tailored to support professionals in the credit and fixed-income sectors. Covering 400+ bond issuers, our datasets provide a high level of granularity, empowering asset managers, institutional investors, and financial analysts to make informed decisions with confidence.

    By combining proprietary financial models with expert analysis, we ensure our Fixed Income Data is actionable, precise, and relevant. Whether you're conducting credit risk assessments, building portfolios, or identifying investment opportunities, Lucror Analytics offers the tools you need to navigate the complexities of high-yield markets.

    What Makes Lucror’s Fixed Income Data Unique?

    Comprehensive Fundamental Analysis: Our datasets focus on issuer-level credit data for complex high-yield bond issuers. Through rigorous fundamental analysis, we provide deep insights into financial performance, credit quality, and key operational metrics. This approach equips users with the critical information needed to assess risk and uncover opportunities in volatile markets.

    Analyst-Adjusted Insights: Our data isn’t just raw numbers—it’s refined through the expertise of seasoned credit analysts with 14 years’ average fixed-income experience. Each dataset is carefully reviewed and adjusted to reflect real-world conditions, providing clients with actionable intelligence that goes beyond automated outputs.

    Focus on High-Yield Markets: Lucror’s specialization in high-yield markets across Europe, Asia, and Latin America allows us to offer a targeted and detailed dataset. This focus ensures that our clients gain unparalleled insights into some of the most dynamic and complex credit markets globally.

    How Is the Data Sourced? Lucror Analytics employs a robust and transparent methodology to source, refine, and deliver high-quality data:

    • Public Sources: Includes issuer filings, bond prospectuses, financial reports, and market data.
    • Proprietary Analysis: Leveraging proprietary models, our team enriches raw data to provide actionable insights.
    • Expert Review: Data is validated and adjusted by experienced analysts to ensure accuracy and relevance.
    • Regular Updates: Models are continuously updated to reflect market movements, regulatory changes, and issuer-specific developments.

    This rigorous process ensures that our data is both reliable and actionable, enabling clients to base their decisions on solid foundations.

    Primary Use Cases

    1. Fundamental Research: Institutional investors and analysts rely on our data to conduct deep-dive research into specific issuers and sectors. The combination of raw data, adjusted insights, and financial models provides a comprehensive foundation for decision-making.

    2. Credit Risk Assessment: Lucror’s financial models provide detailed credit risk evaluations, enabling investors to identify potential vulnerabilities and mitigate exposure. Analyst-adjusted insights offer a nuanced understanding of creditworthiness, making it easier to distinguish between similar issuers.

    3. Portfolio Management: Lucror’s datasets support the development of diversified, high-performing portfolios. By combining issuer-level data with robust financial models, asset managers can balance risk and return while staying aligned with investment mandates.

    4. Strategic Decision-Making: From assessing market trends to evaluating individual issuers, Lucror’s data empowers organizations to make informed, strategic decisions. The regional focus on Europe, Asia, and Latin America offers unique insights into high-growth and high-risk markets.

    Key Features of Lucror’s Data
    • 400+ High-Yield Bond Issuers: Coverage across Europe, Asia, and Latin America ensures relevance in key regions.
    • Proprietary Financial Models: Created by one of the best independent analyst teams on the street.
    • Analyst-Adjusted Data: Insights refined by experts to reflect off-balance sheet items and idiosyncrasies.
    • Customizable Delivery: Data is provided in formats and frequencies tailored to the needs of individual clients.

    Why Choose Lucror Analytics? Lucror Analytics is an independent provider free from conflicts of interest. We are committed to delivering high-quality financial models for credit and fixed-income professionals. Our approach combines proprietary models with expert insights, ensuring accuracy, relevance, and utility.

    By partnering with Lucror Analytics, you can:
    • Save costs and create internal efficiencies by outsourcing highly involved and time-consuming processes, including financial analysis and modelling.
    • Enhance your credit risk ...

  14. Change and variability of Arctic Systems Nordaustlandet, Svalbard -...

    • access.earthdata.nasa.gov
    Updated Apr 20, 2017
    Cite
    (2017). Change and variability of Arctic Systems Nordaustlandet, Svalbard - "Kinnvika" (Sweden) [Dataset]. https://access.earthdata.nasa.gov/collections/C1214591091-SCIOPS
    Explore at:
    Dataset updated
    Apr 20, 2017
    Time period covered
    Apr 28, 2008 - May 9, 2008
    Area covered
    Description

    DGPS data from surveys on the Svalbard ice cap Vestfonna, spring 2008: 1) kinematic profiles along the ridges and 2) static data from mass-balance markers.

  15. Replication Data for: "Genocidal Consolidation: Final Solutions to Elite...

    • dataverse.harvard.edu
    • search.datacite.org
    Updated May 7, 2020
    Cite
    Eelco van der Maat (2020). Replication Data for: "Genocidal Consolidation: Final Solutions to Elite rivalry" [Dataset]. http://doi.org/10.7910/DVN/VJTPJK
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    May 7, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Eelco van der Maat
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Replication files for “Genocidal Consolidation: Final Solutions to Elite Rivalry”, E. van der Maat, 16-12-19. The paper contains three analyses. Each analysis has its own replication folder.

    First analysis (genocidal consolidation onset) — IO_GC_replication_I
    The first analysis has 3 .R files, one Stata .do file, and three data files. To replicate the models of the first analysis, first run: 1) functions.R, 2) two-stage probit.R. Then run the main latent.R file to replicate the models in the paper; it contains:
    • replication of Table 4 in the paper
    • replication of Tables A.4 and A.10 of Appendices C and G
    Next, run the analysisI.do file; it contains:
    • replication of the crosstabs in Table 3 (p 30)
    • replication of the effect estimates (p 31)
    • replication of Table A.2 of Appendix C

    Second analysis (elite purges) — IO_GC_replication_II
    The second analysis folder has a single Stata .do file and six data files. To replicate the models, run the analysisII.do file; it contains:
    • replication of the crosstab in Figure 3 (p 35)
    • replication of Table 5
    • replication of Tables A.5 and A.6 of Appendix D
    • replication of Table A.11 of Appendix G

    Third analysis (leader fates) — IO_GC_replication_III
    The third analysis folder contains a single Stata .do file, two data files, and a log file. To replicate the models, run the analysisIII.do file; it contains:
    • replication of the main analysis (Figure 4: leader fates; p 41)
    • replication of the Rosenbaum sensitivity analysis (footnotes 136 & 137)
    • replication of balance checks for various specifications (Table A.7 of Appendix E)
    • replication of leader propensity scores and matches (Tables A.8 and A.15–A.25)
    • replication of alternative specifications (Table A.9 of Appendix E)
    • replication of balance checks for various HI specifications (Table A.12 of Appendix G)
    • replication of alternative specification with HI (Table A.14 of Appendix G)
    Note that this file may take a very long time to run because of a total of 150,000 bootstraps. A log is included for easy reference of outcomes. To check replication results, it is probably easiest to make sure the code works on your machine, then run the file with a log, and check the log when it is finished running. Good luck!

  16. Data from: ECOSTRESS Geolocation Daily L1B Global 70m V001

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Jul 3, 2025
    + more versions
    Cite
    LP DAAC;NASA/JPL/ECOSTRESS (2025). ECOSTRESS Geolocation Daily L1B Global 70m V001 [Dataset]. https://catalog.data.gov/dataset/ecostress-geolocation-daily-l1b-global-70m-v001-6acb0
    Explore at:
    Dataset updated
    Jul 3, 2025
    Dataset provided by
    LP DAAC;NASA/JPL/ECOSTRESS
    Description

    Forward processing of ECOSTRESS Version 1 data products was discontinued on January 6, 2025. Users are encouraged to transition to the ECOSTRESS Version 2 data products. The ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) mission measures the temperature of plants to better understand how much water plants need and how they respond to stress. ECOSTRESS is attached to the International Space Station (ISS) and collects data globally between 52 degrees N and 52 degrees S latitudes. The ECO1BGEO Version 1 data product provides the geolocation information for the radiance values retrieved in the ECO1BRAD Version 1 data product. The ECO1BGEO data product should be used to georeference the ECO1BRAD, ECO2CLD, ECO2LSTE, ECO3ANCQA, ECO3ETPTJPL, ECO4ESIPTJPL, and ECO4WUE data products. The geolocation processing corrects the ISS-reported ephemeris and attitude data by image matching with a global ortho-base derived from Landsat data and then assigns latitude and longitude values to each of the Level 1 radiance pixels. When image matching is successful, the data are geolocated to better than 50 meter (m) accuracy. The ECO1BGEO data product is provided as swath data. It contains data variables for latitude and longitude values, solar and view geometry information, surface height, and the fraction of pixel on land versus water.

    Known Issues

    Geolocation accuracy: In cases where scenes were not successfully matched with the ortho-base, the geolocation error is significantly larger, with the worst-case geolocation error for uncorrected data being 7 kilometers (km). Within the metadata of the ECO1BGEO file, if the field "L1GEOMetadata/OrbitCorrectionPerformed" is "True," the data was corrected, and geolocation accuracy should be better than 50 m. If this is "False," then the data was processed without correcting the geolocation and will have up to 7 km geolocation error.

    Data acquisition gap: ECOSTRESS was launched on June 29, 2018, and moved to autonomous science operations on August 20, 2018, following a successful in-orbit checkout period. On September 29, 2018, ECOSTRESS experienced an anomaly with its primary mass storage unit (MSU). ECOSTRESS has a primary and secondary MSU (A and B). On December 5, 2018, the instrument was switched to the secondary MSU and science operations resumed. On March 14, 2019, the secondary MSU experienced a similar anomaly, temporarily halting science acquisitions. On May 15, 2019, a new data acquisition approach was implemented and science acquisitions resumed. To optimize the new acquisition approach, TIR bands 2, 4, and 5 are being downloaded. The data products are as previously, except the bands not downloaded contain fill values (L1 radiance and L2 emissivity). This approach was implemented from May 15, 2019, through April 28, 2023.

    Data acquisition gap: From February 8 to February 16, 2020, an ECOSTRESS instrument issue resulted in a data anomaly that created striping in band 4 (10.5 micron). These data products have been reprocessed and are available for download. No ECOSTRESS data were acquired on February 17, 2020, due to the instrument being in SAFEHOLD. Data acquired following the anomaly have not been affected.

    Data acquisition: ECOSTRESS has now successfully returned to 5-band mode after being in 3-band mode since 2019. This feature was successfully enabled following a Data Processing Unit firmware update (version 4.1) to the payload on April 28, 2023. To better balance contiguous science data scene variables, 3-band collection is currently being interleaved with 5-band acquisitions over the orbital day/night periods.
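    As a practical note, the correction flag quoted above can be read directly from a granule. The sketch below assumes ECO1BGEO granules are distributed as HDF5 and uses a placeholder file name; the metadata path is the one named in the description, but treat the exact storage layout as an assumption.

```python
# Minimal sketch: check whether geolocation correction was applied to an
# ECO1BGEO granule, using the metadata field named in the description above.
# The file name is a placeholder; the HDF5 layout is assumed, not verified here.
import h5py

with h5py.File("ECOv001_L1B_GEO_example.h5", "r") as f:
    flag = f["L1GEOMetadata/OrbitCorrectionPerformed"][()]

flag = flag.decode() if isinstance(flag, bytes) else str(flag)
# "True"  -> geolocation accuracy better than ~50 m
# "False" -> uncorrected geolocation, error up to ~7 km
print("OrbitCorrectionPerformed:", flag)
```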

  17. DataForSEO Labs API for keyword research and search analytics, real-time...

    • datarade.ai
    .json
    Updated Jun 4, 2021
    Cite
    DataForSEO (2021). DataForSEO Labs API for keyword research and search analytics, real-time data for all Google locations and languages [Dataset]. https://datarade.ai/data-products/dataforseo-labs-api-for-keyword-research-and-search-analytics-dataforseo
    Explore at:
    .json. Available download formats
    Dataset updated
    Jun 4, 2021
    Dataset provided by
    Authors
    DataForSEO
    Area covered
    Cocos (Keeling) Islands, Kenya, Armenia, Morocco, Tokelau, Azerbaijan, Isle of Man, Mauritania, Micronesia (Federated States of), Korea (Democratic People's Republic of)
    Description

    DataForSEO Labs API offers three powerful keyword research algorithms and historical keyword data:

    • Related Keywords from the “searches related to” element of Google SERP.
    • Keyword Suggestions that match the specified seed keyword with additional words before, after, or within the seed key phrase.
    • Keyword Ideas that fall into the same category as specified seed keywords.
    • Historical Search Volume with current cost-per-click and competition values.

    Based on in-market categories of Google Ads, you can get keyword ideas from the relevant Categories For Domain and discover relevant Keywords For Categories. You can also obtain Top Google Searches with AdWords and Bing Ads metrics, product categories, and Google SERP data.

    You will find well-rounded ways to scout the competitors:

    • Domain Whois Overview with ranking and traffic info from organic and paid search.
    • Ranked Keywords that any domain or URL has positions for in SERP.
    • SERP Competitors and the rankings they hold for the keywords you specify.
    • Competitors Domain with a full overview of its rankings and traffic from organic and paid search.
    • Domain Intersection keywords for which both specified domains rank within the same SERPs.
    • Subdomains for the target domain you specify, along with the ranking distribution across organic and paid search.
    • Relevant Pages of the specified domain with rankings and traffic data.
    • Domain Rank Overview with ranking and traffic data from organic and paid search.
    • Historical Rank Overview with historical data on rankings and traffic of the specified domain from organic and paid search.
    • Page Intersection keywords for which the specified pages rank within the same SERP.

    All DataForSEO Labs API endpoints function in the Live mode. This means you will be provided with the results in response right after sending the necessary parameters with a POST request.
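    As an illustration of the Live mode described above, a request can be sketched as follows. The endpoint path, task fields, and credentials are assumptions based on the generic DataForSEO pattern (HTTP Basic auth, POST with a JSON array of tasks), not details taken from this listing.

```python
# Illustrative Live-mode request sketch; endpoint path and task fields are
# assumptions shown only to convey the request shape.
import requests

API_LOGIN = "your_login"
API_PASSWORD = "your_password"
# Hypothetical Labs endpoint path.
ENDPOINT = "https://api.dataforseo.com/v3/dataforseo_labs/google/related_keywords/live"

payload = [{
    "keyword": "account balance",
    "location_name": "United States",
    "language_name": "English",
}]

response = requests.post(ENDPOINT, json=payload, auth=(API_LOGIN, API_PASSWORD))
response.raise_for_status()
print(response.json())
```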

    The limit is 2,000 API calls per minute; however, you can contact our support team if your project requires higher rates.

    We offer well-rounded API documentation, GUI for API usage control, comprehensive client libraries for different programming languages, free sandbox API testing, ad hoc integration, and deployment support.

    We have a pay-as-you-go pricing model. You simply add funds to your account and use them to get data. The account balance doesn't expire.

  18. Customer Intelligence Software Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Mar 7, 2025
    Cite
    Archive Market Research (2025). Customer Intelligence Software Report [Dataset]. https://www.archivemarketresearch.com/reports/customer-intelligence-software-52983
    Explore at:
    ppt, pdf, doc. Available download formats
    Dataset updated
    Mar 7, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Customer Intelligence (CI) software market is experiencing robust growth, projected to reach $882.8 million in 2025 and exhibiting a Compound Annual Growth Rate (CAGR) of 7.0% from 2025 to 2033. This expansion is driven by the increasing need for businesses of all sizes to understand and leverage customer data for improved decision-making, enhanced customer experience, and ultimately, increased profitability. Key drivers include the rising adoption of cloud-based solutions, the proliferation of big data analytics, and the growing importance of personalized marketing strategies. Businesses are increasingly recognizing the value of consolidating customer data from disparate sources to gain a holistic view of their customers, enabling them to tailor offerings, improve customer service, and optimize marketing campaigns. The market is segmented by type (Customer Experience, Customer Data, Customer Feedback, and Other) and application (Large Enterprises and SMEs), with large enterprises currently dominating the market due to their greater resources and more complex data needs. However, the SME segment is expected to witness significant growth fueled by the availability of more affordable and user-friendly CI software solutions. Geographic distribution shows strong market presence in North America and Europe, although Asia-Pacific is anticipated to emerge as a significant growth region in the coming years, driven by the rapid digitalization and increasing adoption of advanced analytics in emerging economies. The competitive landscape is marked by a mix of established players like Oracle, IBM, and SAS, alongside agile startups and specialized providers like NGDATA and Zeotap. This competitive environment fosters innovation and drives the development of increasingly sophisticated CI software solutions. The continued focus on data privacy and security regulations presents a key challenge for the industry, as businesses strive to balance the benefits of data-driven insights with ethical and compliance considerations. The future trajectory of the CI software market is likely to be shaped by advancements in artificial intelligence (AI), machine learning (ML), and natural language processing (NLP), further enhancing the capabilities of these solutions to extract meaningful insights from vast amounts of customer data. Integration with other business intelligence and CRM systems will also be crucial in maximizing the value of CI software for businesses.

  19. Water Balance Model Inputs and Outputs for the Conterminous United States,...

    • data.usgs.gov
    • datadiscoverystudio.org
    • +4more
    + more versions
    Cite
    David Wolock; Gregory McCabe, Water Balance Model Inputs and Outputs for the Conterminous United States, 1900-2015 [Dataset]. http://doi.org/10.5066/F71V5CWN
    Explore at:
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    David Wolock; Gregory McCabe
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Area covered
    Contiguous United States, United States
    Description

    This metadata record describes monthly input and output data covering the period 1900-2015 for a water-balance model described in McCabe and Wolock (2011). The input datasets are precipitation and air temperature from the PRISM group at Oregon State University. The model outputs include estimated potential evapotranspiration (PET), actual evapotranspiration (AET), runoff (RUN) (streamflow per unit area), soil moisture storage (STO), and snowfall (SNO). The datasets are arranged in tables of monthly total or average values measured in millimeters or degrees C and then multiplied by 100. The data are indexed by the identifier PRISMID, which refers to an ASCII raster of cells in an associated file named PRISMID.asc. Water-balance model inputs and outputs also can be linked to a file (PRISMID_LL.csv) of latitude and longitude values in a separate comma separated data file based on PRISMID values.
    Each input and output variable comma-separated file contains 10 years of monthly data ...
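    A minimal sketch of the linkage described above (joining a variable table to the coordinate file on PRISMID) could look like this; the variable file name and the coordinate column names are assumptions, while PRISMID_LL.csv is named in the metadata.

```python
# Minimal sketch: link a water-balance output table to cell coordinates via the
# PRISMID key described above. "runoff.csv" and the latitude/longitude column
# names are placeholders; PRISMID_LL.csv is named in the metadata record.
import pandas as pd

runoff = pd.read_csv("runoff.csv")       # monthly RUN values, keyed by PRISMID
coords = pd.read_csv("PRISMID_LL.csv")   # PRISMID plus latitude/longitude columns

linked = runoff.merge(coords, on="PRISMID", how="left")

# Values are stored as (mm or deg C) * 100 per the description, so rescale.
month_cols = [c for c in linked.columns if c not in coords.columns]
linked[month_cols] = linked[month_cols] / 100.0
print(linked.head())
```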

  20. Data from: LamaH-Ice: LArge-SaMple DAta for Hydrology and Environmental...

    • hydroshare.org
    • beta.hydroshare.org
    • +2more
    zip
    Updated Jun 21, 2024
    Cite
    Hordur Bragi Helgason; Bart Nijssen (2024). LamaH-Ice: LArge-SaMple DAta for Hydrology and Environmental Sciences for Iceland [Dataset]. http://doi.org/10.4211/hs.86117a5f36cc4b7c90a5d54e18161c91
    Explore at:
    zip (9.3 GB). Available download formats
    Dataset updated
    Jun 21, 2024
    Dataset provided by
    HydroShare
    Authors
    Hordur Bragi Helgason; Bart Nijssen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1950 - Dec 31, 2021
    Area covered
    Description

    LamaH-Ice (LArge-SaMple DAta for Hydrology and Environmental Sciences for Iceland) is a large-sample hydro-meteorological dataset for Iceland. The dataset includes daily and hourly hydro-meteorological timeseries, including observed streamflow, and catchment characteristics for 107 river basins in Iceland. The catchment characteristics describe the topographic, hydroclimatic, land cover, vegetation, soils, geological and glaciological attributes of the river catchments, as well as the human influence on streamflow in the catchments. LamaH-Ice conforms to the structure of existing large-sample hydrology datasets and includes most variables offered in these datasets, as well as additional information relevant to cold-region hydrology, e.g., timeseries of snow cover, glacier mass balance and albedo. A large majority of the watersheds in LamaH-Ice are not subject to human activities, such as diversions and flow regulations. The dataset is described in detail in a paper in the journal "Earth System Science Data" (ESSD - https://doi.org/10.5194/essd-2023-349). The code used to assemble the dataset is available in folder "F_appendix" in the dataset as well as on GitHub (https://github.com/hhelgason/LamaH-Ice).

    We offer two downloadable files for the LamaH-Ice dataset: 1) Hydrometeorological time series with both daily and hourly resolutions (30 GB after decompression) and 2) Hydrometeorological time series with daily resolution only (2 GB). Other than the temporal resolution, there are no differences between the two downloadable files. This HydroShare resource also hosts the "LamaH-Ice Caravan extension" (1 GB), which complements the "Caravan - A global community dataset for large-sample hydrology" Caravan dataset (Kratzert et al., 2023). The data is formatted in the same manner as the data currently existing in Caravan. To process the Caravan extension, the following guide was used: https://github.com/kratzert/Caravan/wiki/Extending-Caravan-with-new-basins. Some features, e.g. hourly atmospheric and streamflow series, glacier mass balance and MODIS timeseries data are thus only available in the LamaH-Ice dataset.

    Data disclaimer: The Icelandic Meteorological Office (IMO) and the National Power Company of Iceland (NPC) own the data from most streamflow gauges in the dataset. The streamflow data is published on Hydroshare with permission of all data owners. Neither we nor the provider of the streamflow dataset can be liable for the data provided. The IMO and the NPC reserve the rights to retrospectively check and update the streamflow timeseries at any time, and these changes will not be reflected in this published dataset. If up-to-date data is needed, users are encouraged to contact the IMO and the NPC.

    License: The streamflow data is subject to the CC BY-NC 4.0 (creativecommons.org/licenses/by-nc/4.0/). The streamflow data cannot be used for commercial purposes. All data except for the streamflow measurements are subject to the CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Users can share and adapt the dataset only if appropriate credit is given (the ESSD data description paper is cited, the version of the dataset and all data sources are listed which are declared in the folder "Info") and any changes are clearly indicated, and a link to the original license is provided.

    Updates since the HydroShare repository was first created on August 18, 2023:

    May 31, 2024:
    • Streamflow series were corrected (replaced) for gauges with IDs 31, 70 and 72, and hydrological signatures and water balance files were recalculated using the corrected streamflow series. In the Caravan extension, gauges 43 and 51 were also corrected.

    March 12, 2024 (Dataset Revision): In line with the ESSD manuscript revision, significant updates have been made. For a detailed list, visit https://doi.org/10.5194/essd-2023-349-AC1. Key changes include:
    • A timeseries for reference ET has been computed using RAV-II reanalysis meteorological timeseries.
    • Climate indices recalculated with RAV-II reanalysis; ERA5-Land indices remain under an "_ERA5L" suffix.
    • Hydrological signatures are now derived from RAV-II reanalysis precipitation.
    • Standardized .csv column separators to semicolons.
    • Enhanced metadata for all shapefiles.
    • Added attributes (g_lon, g_lat, g_frac_dyn, g_area_dyn) to the dataset.
    • Reordered catchment attributes table columns for consistency with the LamaH-Ice paper.
    • Corrected ERA5-Land reanalysis errors for shortwave and longwave flux timeseries.
    • Streamflow series were corrected for gauges with IDs 43 and 51 (in LamaH, not the Caravan extension).

    February 22, 2024:
    • Caravan Extension Fix: Corrected latitude and longitude mix-up.

    October 1, 2023:
    • GeoPackages added as an alternative to shapefiles; readme files added in all subfolders for guidance.
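    As a small usage note, the semicolon-separated layout mentioned in the March 2024 update means tables should be read with an explicit separator; a minimal sketch with a placeholder file path:

```python
# Minimal sketch: read one LamaH-Ice table, which uses ";" as the column
# separator per the March 12, 2024 update. The file path is a placeholder,
# not the dataset's actual folder structure.
import pandas as pd

attributes = pd.read_csv("path/to/Catchment_attributes.csv", sep=";")
print(attributes.shape)
print(list(attributes.columns)[:10])
```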
