This paper proposes a Bayesian alternative to the synthetic control method for comparative case studies with a single or multiple treated units. We adopt a Bayesian posterior predictive approach to Rubin's causal model, which allows researchers to make inferences about both individual and average treatment effects on treated observations based on the empirical posterior distributions of their counterfactuals. The prediction model we develop is a dynamic multilevel model with a latent factor term to correct biases induced by unit-specific time trends. It also considers heterogeneous and dynamic relationships between covariates and the outcome, thus improving precision of the causal estimates. To reduce model dependency, we adopt a Bayesian shrinkage method for model searching and factor selection. Monte Carlo exercises demonstrate that our method produces more precise causal estimates than existing approaches and achieves correct frequentist coverage rates even when the sample size is relatively small and rich heterogeneities are present in the data. We illustrate the method with two empirical examples from political economy.
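Not the authors' model (which is a dynamic multilevel model with latent factors and Bayesian shrinkage); the following is a minimal sketch, on made-up data, of the posterior-predictive logic the abstract describes: fit the treated unit's pre-treatment outcomes on a donor pool, draw counterfactuals for the post-treatment window, and read the treatment effect off the posterior of observed minus counterfactual. It uses PyMC; all names and values are hypothetical.

import numpy as np
import pymc as pm

# Hypothetical panel: one treated unit and J control units over T periods,
# with treatment starting at period T0. Made-up data for illustration only.
rng = np.random.default_rng(3)
T, J, T0 = 40, 8, 30
Y_controls = rng.normal(size=(T, J)).cumsum(axis=0)   # donor-pool outcomes
y_treated = Y_controls @ rng.dirichlet(np.ones(J)) + rng.normal(0, 0.3, size=T)
y_treated[T0:] += 2.0                                 # true treatment effect

with pm.Model():
    w = pm.Normal("w", 0.0, 1.0, shape=J)             # donor weights
    a = pm.Normal("a", 0.0, 1.0)                      # intercept
    sigma = pm.HalfNormal("sigma", 1.0)
    mu = a + pm.math.dot(Y_controls[:T0], w)
    pm.Normal("y_pre", mu, sigma, observed=y_treated[:T0])
    idata = pm.sample(1000, tune=1000, progressbar=False, random_seed=0)

# Posterior draws of the post-treatment counterfactual (predictive noise
# omitted for brevity); the ATT posterior is observed minus counterfactual.
post = idata.posterior.stack(s=("chain", "draw"))
w_draws = post["w"].values                            # shape (J, S)
a_draws = post["a"].values                            # shape (S,)
y0_draws = a_draws + Y_controls[T0:] @ w_draws        # shape (T-T0, S)
att_draws = (y_treated[T0:, None] - y0_draws).mean(axis=0)
print(f"ATT posterior mean ≈ {att_draws.mean():.2f}")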
https://www.globaldata.com/privacy-policy/
With many products having a high calorie and sugar content, the soft drinks category had a gap in the market for a naturally sweetened and low-calorie alternative to more traditional offerings. In some cases, an innovative start-up needs the support of a more established company to take advantage of such an opportunity.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Number of missing data points whose actual value falls within the 95% and 80% credible intervals (CrIs), out of the total number of missing data points.
The purpose of this case study is to expand knowledge about outstanding Alternative Transportation System projects in parks and public lands. The Santa Ana National Wildlife Refuge visitor tram service is a partnership between the FWS and the Valley Nature Center, a local non-profit organization, and the tram system's success is due in large part to that partnership. Various funding mechanisms and partnering arrangements over the years have contributed to the development of this system.
The four participants in the study are identified by fictional names, which appear in the first column of the Excel table ("ID"): Eduardo, Romain, Salim and Shahad. The second column of the file indicates the measured outcome (the dependent variable). The two dependent variables in this study were (a) letter-sound-correspondence knowledge of a series of letters (a/A, r/R, u/U, i/I, l/L, f/F, and é/É) and (b) phonemic awareness (i.e., first-phoneme identification). The third column ("Phase Name") indicates the phase: baseline (before the introduction of the intervention), intervention, and maintenance (measures administered at least two weeks after the end of the intervention). The fourth column ("Session") indicates the session (across all participants) at which the probe was collected. Sessions are standardized across participants, meaning they generically represent "dates" at which probes were collected across students, not the actual number of probe sessions administered to each student (for example, baseline probes were collected at sessions 7 and 12 for Salim, but not in between). The fifth column ("Outcome value") indicates the percentage of independent correct responses provided by students on probe measures.
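A minimal sketch of how this long-format table might be summarized in Python, assuming a hypothetical filename single_case_data.xlsx and that the second column is headed "Measured outcome" (its exact header is not given above):

import pandas as pd

# Load the long-format single-case dataset (hypothetical filename;
# "Measured outcome" is an assumed header for the second column).
df = pd.read_excel("single_case_data.xlsx")

# Mean percentage of independent correct responses per participant,
# dependent variable, and phase (baseline / intervention / maintenance).
summary = (
    df.groupby(["ID", "Measured outcome", "Phase Name"])["Outcome value"]
      .agg(["mean", "count"])
      .round(1)
)
print(summary)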
https://creativecommons.org/publicdomain/zero/1.0/
A partial re-upload of the dataset found on Zenodo. This is an alternative to the Mobius upload, as it contains data files from the other half of the period in which the study was conducted.
The data dictionary PDF for reading the CSVs can be found on Fitabase's site. The original dataset was used for the study with doi: 10.2196/resprot.6513.
Data 1 - R&N TNT file (Data 1.tnt): TNT file used in analysis of original data of R&N.
Data 2 - BPCA PC scores (Data 2.csv): Principal component scores found using BPCA.
Data 3 - Iterative Imputation PC scores (Data 3.csv): Principal component scores identified using iterative imputation.
Data 4 - BPCA loadings (Data 4.csv): Loadings on each morphometric character identified using BPCA.
Data 5 - Iterative Imputation loadings (Data 5.csv): Loadings on each morphometric character identified by iterative imputation.
Data 6 - BPCA TNT file (Data 6.tnt): TNT file used in analysis of PC characters identified using BPCA.
Data 7 - Iterative Imputation TNT file (Data 7.tnt): TNT file used in the analysis of the PC characters identified using iterative imputation.
Data 8 - PC character loadings (Data 8.docx): Top 25 most heavily loaded characters on the top three principal components. The top 25 most highly loaded morphometric characters for the first three principal components, identified using both methods of treating missing data. The...
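Not the original analysis code; a minimal sketch of the iterative-imputation-then-PCA approach behind Data 3 and Data 5, using scikit-learn's IterativeImputer (an assumption; the original study may have used different software), on a made-up morphometric matrix:

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.decomposition import PCA

# Hypothetical morphometric matrix: rows = taxa, columns = characters,
# with missing measurements coded as NaN (~20% missing at random).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))
X[rng.random(X.shape) < 0.2] = np.nan

# Iteratively impute missing values, then extract PC scores and loadings.
X_imputed = IterativeImputer(max_iter=25, random_state=0).fit_transform(X)
pca = PCA(n_components=3)
scores = pca.fit_transform(X_imputed)   # analogous to the PC scores files
loadings = pca.components_.T            # analogous to the loadings files
print(scores.shape, loadings.shape)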
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset used for the publication: Di Felice, Louisa Jane, Maddalena Ripa, and Mario Giampietro. "An alternative to market-oriented energy models: Nexus patterns across hierarchical levels." Energy Policy 126 (2019): 431-443. The dataset follows the distinction across hierarchical levels specified in the publication.
The same dataset was also used for a case study developed for the MAGIC project, available here.
https://www.icpsr.umich.edu/web/ICPSR/studies/9965/terms
This data collection investigates the effectiveness of alternative approaches to reducing delays in criminal appeals. Interviews were conducted with court representatives from districts employing differing alternatives. These districts and approaches are (1) case management in the Illinois Appellate Court, Fourth District, in Springfield, (2) staff screening for submission without oral argument in the California Court of Appeals, Third District, in Sacramento, and (3) fast-tracking procedures in the Rhode Island Supreme Court. Parallel interviews were conducted in public defenders' offices in three additional locations: Colorado, the District of Columbia, and Minnesota. Questions focused on the backlogs courts were facing, the reasons for the backlogs, and the consequences. Participants were asked about the fairness and possible consequences of procedures employed by their courts and other courts in this study. Case data were acquired from court records of the Springfield, Sacramento, and Rhode Island courts.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/2.1/customlicense?persistentId=doi:10.7910/DVN/ZB9ENW
This paper explores patterns of financial transactions at the individual level in order to establish the effects of mobile money usage in a variety of country case examples. Data from the Financial Inclusion Insights program were analyzed for Bangladesh, India, Kenya, Nigeria, Pakistan, Tanzania, and Uganda to establish differences between individuals who use mobile money services and their non-user counterparts. This analysis builds on previous research into the household-level effects of the widely popular M-PESA services in Kenya to see if financial transaction patterns can be replicated in other country data. Contrary to previous literature, m-money usership was not a consistent predictor of transaction frequency and transaction distance in the country cases where data were available. To examine m-money's potential as a complement or substitute to formal banking, usage frequency of bank account services was regressed on m-money usership, interacted with personal bank account ownership. Findings suggest that m-money encouraged bank account usage in the country samples where m-money was less prevalent overall, and discouraged bank account usage in the country samples where it was more prevalent. Overall, this study finds considerable differences in the effects of mobile money by country, as well as discrepant effects when interacted with bank account ownership.
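A minimal sketch of that interaction specification, using statsmodels on made-up data; the variable names (bank_usage_freq, mm_user, bank_owner) are hypothetical, and the original study's controls and estimator are not reproduced:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical micro-data standing in for the Financial Inclusion
# Insights survey variables described above.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "bank_usage_freq": rng.poisson(2, n),   # bank account uses per month
    "mm_user": rng.integers(0, 2, n),       # mobile money user (0/1)
    "bank_owner": rng.integers(0, 2, n),    # owns a personal bank account (0/1)
})

# Bank account usage regressed on m-money usership, interacted with
# bank account ownership (the complement-vs-substitute question).
model = smf.ols("bank_usage_freq ~ mm_user * bank_owner", data=df).fit()
print(model.summary())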
Woelfer_&_Nyakatura_Ecol_and_Evol_2019 - R project files (Woelfer_&_Nyakatura_Ecol_and_Evol_2019.7z): This folder contains two R projects (Femur and Scapula, respectively) and associated data, such as surface models for the femora and photos for the scapula. See README.txt for further information.
https://www.icpsr.umich.edu/web/ICPSR/studies/28024/terms
This data collection was created to study agenda-setting and alternative specification in the federal government. It concentrates on two federal policy areas, health and transportation, but the theories generated in the research may be quite widely applicable beyond those two areas. The aim of the work was not to study how issues are decided in some authoritative process like a congressional vote, but instead to study how issues get to be issues in the first place, how items rise and fall on the governmental agenda, and how the alternatives from which choices are made are generated. The results of the study were published in John W. Kingdon, Agendas, Alternatives, and Public Policies (First Edition, Little Brown, 1984; Second Edition, HarperCollins, 1995; Longman Classics in Political Science Edition, Longman, 2003; Updated Second Edition, with Epilogue on Health Care Reform, Longman, 2011). The study's methods are described in detail in the Appendix to that book, and are included as part of the documentation for this data collection.

The major data source is a set of interviews that John Kingdon conducted in four waves (the summers of 1976, 1977, 1978, and 1979), with well-informed respondents either in the federal government (both congressional and executive) or involved in health or transportation policy around the federal government (e.g., lobbyists, journalists, academics, consultants). "Elite and specialized" interviews, to use Lewis Dexter's terminology (see Elite and Specialized Interviewing, Northwestern University Press, 1969), are conducted differently than standard survey research interviewing. The idea is to have a two-way conversation with a well-informed and highly involved respondent, rather than strict question and response. As such, the list of questions used was not a hard-and-fast interview schedule or questionnaire, but a kind of guide. The questions were not always asked in the same order, and indeed, not all of the questions were always asked. Question wording may have varied slightly from one interview to another. Various ad hoc probes were inserted as they seemed appropriate. Sometimes in this sort of interview, the interviewer makes a statement rather than asking a question. Still, the central questions were usually asked in roughly the same wording. Thus, when the interview write-up says "Q1," that is the first question in the standard list of questions used.

Interviews were not taped or otherwise recorded verbatim, since the principal investigator firmly believed that, with these sorts of respondents, taping dampened their ability and willingness to be candid. The principal investigator did not want respondents to feel that they were on the record, as respondents were accustomed to dealing with reporters, and when a microphone was in their face, they knew the encounter would be on the record. Notes were taken during the interview, and then written up immediately after; hence, the typescripts of the interviews are labeled "write-up" instead of "transcript." All 247 write-ups have a respondent identification number and the date of the interview on the top of the first page. The principal investigator also coded the interview write-ups into quantitative data files, despite the nonrandom selection of respondents and the fluid conduct of the interviews. He did this to support quantitative judgments (e.g., "this issue was mentioned frequently in 1978 and not frequently in 1979," or "this factor was hardly ever mentioned in the interviews").
Each interview was coded by two coders, and then their judgments were combined. In addition to generic identifying information, there are two general categories of variables. One category, referred to as "global codes" in the codebook, is composed of ratings of the importance of each of several actors (e.g., mass media, president himself, interest groups, congressional staffers). The other category, referred to as "problem codes," is a coding of the problems that respondents discussed in their interviews, and is divided into health and transportation. A full description of coding procedures is contained in the data collection documentation. Interview data are supplemented by a series of 23 case studies in health and transportation, and by some attention to other sources of data like congressional hearing records and public opinion data. In addition to ...
Abstract: Modern home-range estimation typically relies on data derived from expensive radio- or GPS-tracking. Although trapping represents a low-cost alternative to telemetry, an evaluation of the performance of home-range estimators on trap-derived data has been lacking. Using simulated data, we evaluate three variables reflecting the key trade-offs ecologists face when designing a trapping study: 1) the number of observations obtained per individual, 2) the trap density, and 3) the proportion of the home range falling inside the trapping area. We compare the performance of five home-range estimators (MCP, LoCoH, KDE, AKDE, bicubic interpolation). We further explore the potential benefits of combining these estimators with asymptotic models, which leverage the saturating behavior of the estimated home-range area as the number of observations increases to improve accuracy, as well as different data-ordering procedures. We then quantify the bias in home-range size under the different scenarios investigated. The number of observations and the proportion of the home range within the trapping grid were the most important predictors of the accuracy and precision of home-range estimates. The use of asymptotic models helped obtain accurate estimates at smaller sample sizes, while distance-ordering improved the precision and asymptotic consistency of estimates. While AKDE was the best-performing estimator under most conditions evaluated, bicubic interpolation was a viable alternative under common real-world conditions of low trap density and low area coverage. Case studies using empirical data from white-tailed deer in Florida and jaguars in Belize supported the findings of our simulations. Although researchers with trap data often overlook home-range estimation, our results indicate that these data have the capacity to yield accurate estimates of home-range size. Trapping data can therefore lower the economic costs of home-range analysis, potentially enlarging the span of species, researchers and questions studied in ecology and conservation.

Methods: Using an I.I.D. movement model, we simulated captures under different trapping conditions and compared the home-range sizes obtained with different methods. We analyzed real-world trapping data for white-tailed deer (Florida) and jaguars (Belize) from publicly available repositories and compared the home-range sizes obtained with those predicted by our simulations.

Usage notes: All data can be opened with R. All R scripts to create and analyze the data can be found at https://github.com/llsociasmartinez/home-range-trapping-data.
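A minimal illustration of the asymptotic-model idea described in the abstract: fitting a saturating (Michaelis-Menten-type) curve to the estimated home-range area as a function of the number of observations, using scipy. The functional form and all values are assumptions for illustration, not the authors' code (their scripts are at the GitHub link above):

import numpy as np
from scipy.optimize import curve_fit

# Saturating model: the estimated area rises with the number of
# observations n and levels off at the asymptote a_inf.
def asymptotic_area(n, a_inf, k):
    return a_inf * n / (k + n)

# Hypothetical sequence of home-range estimates at increasing sample sizes.
n_obs = np.array([5, 10, 20, 40, 80, 160])
area = np.array([1.1, 1.9, 2.6, 3.1, 3.4, 3.5])  # km^2, made-up values

(a_inf, k), _ = curve_fit(asymptotic_area, n_obs, area, p0=(area.max(), 10))
print(f"asymptotic home-range area ≈ {a_inf:.2f} km^2 "
      f"(half-saturation at n ≈ {k:.0f})")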
The uploaded file ("OfM_I4A.mat", standing for "Output from Model - Input for Appraisal") is a saved Matlab workspace containing, as variables, all the results and intermediate quantities (along with the initial input data values) produced by a transport model developed specifically to forecast the effects of the hypothetical introduction, into a corridor, axis or route with given characteristics, of each of the two main types of urban and metropolitan middle-capacity transit systems. These effects are quantitatively reflected in the model by a set of variables indicative of, among many other things, trip volumes and travel conditions. For this analysis, the two main classes of middle-capacity transit systems are the light rail modes (LRT and/or modern tramway) and the bus semirapid transit (BST) systems with exclusive right-of-way, also commonly termed BHLS (Bus with High Level of Service) or BRT (Bus Rapid Transit).
The results contained in the file "OfM_I4A.mat" come from applying the model to a hypothetical case study based on a set of artificial data specifically designed to be illustrative of fairly usual conditions in corridors, axes or routes with intermediate volumes of public transit demand. The code developed to compute the model, including the complete set of data used for this artificial case study, is also available through https://doi.org/10.5281/zenodo.10500901
The values included in the uploaded file may be taken as the input dataset for the subsequent application of a quantitative assessment method, such as Cost-Benefit Analysis, in order to evaluate and ultimately select one of the two alternative middle-capacity transit systems (Light Rail or Bus Semirapid Transit) in this case study or in sufficiently similar ones.
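A minimal sketch of loading the workspace for such a downstream appraisal in Python (an alternative to opening it in Matlab); the variable names inside the .mat file are not documented in this record, so the inspection step below simply lists them:

from scipy.io import loadmat

# Load the Matlab workspace saved by the transport model.
workspace = loadmat("OfM_I4A.mat")

# List the variables it contains (names are defined by the model code,
# not documented here), skipping Matlab's internal "__" keys.
for name, value in workspace.items():
    if not name.startswith("__"):
        print(name, getattr(value, "shape", type(value)))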
These data are part of NACJD's Fast Track Release and are distributed as they were received from the data depositor. The files have been zipped by NACJD for release, but not checked or processed except for the removal of direct identifiers. Users should refer to the accompanying readme file for a brief description of the files available with this collection and consult the investigator(s) if further information is needed. This study addresses changes to state correctional systems and policies in response to correctional spending limits brought on by the worsening economic climate beginning in late 2007. These changes include institutional changes, such as closing prisons and reducing staffing; "back-end" strategies, such as reductions in sentence lengths and reduced parolee supervision; and "front-end" measures, such as funding trade-offs between other governmental and social services. A survey of the 50 state correctional administrators addressed fiscal stress, including size and characteristics of the prison population, prison crowding, prison expenditures, institutional safety, staff morale, public safety, and other justice spending. Additionally, six states were selected for in-depth case studies, which included interviews with facility personnel and site visits by research staff in order to thoroughly understand the challenges faced and the resulting decisions made. Finally, each state's demographic, correctional spending, and overall financial information was collected from census and other publicly available reports. Information on the overall health and safety of the inmates was examined through an econometric comparison of funding levels and statistics on prisoner mortality, crime, and incarceration rates.
This README file provides detailed documentation for the dataset Deep Reinforcement Learning for Pressure Optimization in Water Distribution Networks with Multiple Pumping Stations: Case Study, ASCE Journal of Water Resources Planning and Management. It describes the folder structure, variable definitions, software environment, and step-by-step instructions required to reproduce the analyses and results. The abstract for this dataset is provided separately on the Dryad record page.
DataSetR02.zip/
│
├── Python Scripts
│ ├── env_001.py → Custom Gym environment for multi-pump network.
│ ├── evaluate_001.py → Evaluates trained SAC agent on validation sets.
│ ├── Evaluation_conventional_001.py → Baseline evaluation using fixed set...
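A minimal sketch of the train-then-evaluate pipeline these scripts imply, using stable-baselines3 SAC. Pendulum-v1 is a runnable stand-in for the custom multi-pump environment defined in env_001.py (whose registration id is not documented here); all other names are hypothetical:

import gymnasium as gym
from stable_baselines3 import SAC

# Stand-in environment; the study's custom multi-pump Gym environment
# from env_001.py would be used in its place.
env = gym.make("Pendulum-v1")

# Train a SAC agent, as the script names above suggest, and save it.
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)
model.save("sac_agent")

# Evaluation loop analogous to evaluate_001.py: roll out the trained
# policy deterministically and accumulate reward.
obs, _ = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += float(reward)
    done = terminated or truncated
print(f"episode reward: {total_reward:.1f}")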
Consumer Edge is a leader in alternative consumer data for public and private investors and corporate clients. CE Vision USA includes consumer transaction data on 100M+ credit and debit cards, including 35M+ with activity in the past 12 months and 14M+ active monthly users. Capturing online, offline, and 3rd-party consumer spending on public and private companies, data covers 12K+ merchants, 800+ parent companies, 80+ same store sales metrics, and deep demographic and geographic breakouts. Review data by ticker in our Investor Relations module. Brick & mortar and ecommerce direct-to-consumer sales are recorded on transaction date and purchase data is available for most companies as early as 6 days post-swipe.
Consumer Edge’s consumer transaction datasets offer insights into industries across consumer and discretionary spend such as: • Apparel, Accessories, & Footwear • Automotive • Beauty • Commercial – Hardlines • Convenience / Drug / Diet • Department Stores • Discount / Club • Education • Electronics / Software • Financial Services • Full-Service Restaurants • Grocery • Ground Transportation • Health Products & Services • Home & Garden • Insurance • Leisure & Recreation • Limited-Service Restaurants • Luxury • Miscellaneous Services • Online Retail – Broadlines • Other Specialty Retail • Pet Products & Services • Sporting Goods, Hobby, Toy & Game • Telecom & Media • Travel
Private equity and venture capital firms can leverage insights from CE’s synthetic data to assess investment opportunities, while consumer insights teams and retailers can gain visibility into transaction data’s potential for competitive analysis, shopper behavior, and market intelligence.
CE Vision Benefits • Discover new competitors • Compare sales, average ticket & transactions across competition • Evaluate demographic and geographic drivers of growth • Assess customer loyalty • Explore granularity by geos • Benchmark market share vs. competition • Analyze business performance with advanced cross-cut queries
Corporate researchers and consumer insights teams use CE Vision for:
Corporate Strategy Use Cases • Ecommerce vs. brick & mortar trends • Real estate opportunities • Economic spending shifts
Marketing & Consumer Insights • Total addressable market view • Competitive threats & opportunities • Cross-shopping trends for new partnerships • Demo and geo growth drivers • Customer loyalty & retention
Investor Relations • Shareholder perspective on brand vs. competition • Real-time market intelligence • M&A opportunities
Most popular use cases for private equity and venture capital firms include: • Deal Sourcing • Live Diligences • Portfolio Monitoring
Use Case: Apparel Retailer, Enterprise-Wide Solution
Problem: A $49B global apparel retailer was looking for a comprehensive enterprise-wide consumer data platform to manage and track consumer behavior across a variety of KPIs for use in weekly and monthly management reporting.
Solution: The retailer leveraged Consumer Edge's Vision Pro platform to monitor and report weekly on: • market share, competitive analysis and new entrants • trends by geography and demographics • online and offline spending • cross-shopping trends
Impact: Marketing and Consumer Insights were able to: • develop weekly reporting KPIs on market share for company-wide reporting • establish new partnerships based on cross-shopping trends online and offline • reduce investment in slow-performing channels, both online and offline • determine demo and geo drivers of growth for refined targeting • analyze customer retention and plan campaigns accordingly
We created planning-level cost-estimating tools to assist with projects that need to consider the dam removal alternative: (1) new databases of case studies (Duda et al. 2023a; Tullos and Bountry 2023); (2) scoping questions to help determine whether complexity cost drivers will be present; (3) machine-learning-based regression trees to estimate a potential cost range; and (4) a Computation Guide for Cost Estimating that can be used to inform discussions on potential dam removal cost items, quantities, and unit costs (Appendix A). Using the collected data and knowing some basic characteristics of the dam site, such as average annual flow and geographic location in addition to dam size, can improve the ability to use past case studies for planning-level cost estimating. By additionally incorporating scoping questions to assess the likelihood of complexity cost drivers, the initial uncertainty of a cost estimate can be further reduced, especially for small dams. Applying the Computation Guide for Cost Estimating requires more robust information but helps users reduce cost uncertainty. This step further refines the dam removal objective, removal approach (partial or full; phased or instantaneous), engineering design, construction means and methods, quantities, and unit costs, and results in a quantitative cost estimate.
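A minimal sketch of the regression-tree idea named above, using scikit-learn on hypothetical predictors (dam height, average annual flow, region code); the actual tools are trained on the cited case-study databases, not on these made-up values:

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training records standing in for the dam-removal
# case-study databases cited above.
rng = np.random.default_rng(2)
n = 200
X = pd.DataFrame({
    "dam_height_m": rng.uniform(1, 30, n),
    "avg_annual_flow_cms": rng.uniform(0.1, 100, n),
    "region_code": rng.integers(0, 5, n),
})
# Made-up cost relationship, purely to exercise the model.
y = 1e5 * X["dam_height_m"] * (1 + 0.02 * X["avg_annual_flow_cms"])

# Fit a shallow regression tree and query it for a new (hypothetical) dam.
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
new_dam = pd.DataFrame([{"dam_height_m": 8.0,
                         "avg_annual_flow_cms": 12.0,
                         "region_code": 3}])
print(f"planning-level cost estimate: ${tree.predict(new_dam)[0]:,.0f}")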
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description of contacts screened through household contact investigations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset provides a set of sustainability principles, criteria, and indicators for the evaluation of the conversion-routes stage of a bio-based product. Selected case studies on the use of alternative feedstocks and the production of bio-based products are implemented in order to evaluate the proposed methodology. Mass and energy balances for all case studies, estimated techno-economic metrics, costs of externalities, and risk assessment results are provided.