Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is replication code and data for the paper "The Long and Short (Run) of Trade Elasticities."
Abstract: We propose a novel approach to estimate the trade elasticity at various horizons. When countries change Most Favored Nation (MFN) tariffs, partners that trade on MFN terms experience plausibly exogenous tariff changes. The differential effects on imports from these countries relative to a control group – countries not subject to the MFN tariff scheme – can be used to identify the trade elasticity. We build a panel dataset combining information on product-level tariffs and trade flows covering 1995-2018, and estimate the trade elasticity at short and long horizons using local projections (Jordà, 2005). Our main findings are that the elasticity of tariff-exclusive trade flows in the year following the exogenous tariff change is about −0.76, and the long-run elasticity ranges from −1.75 to −2.25. Our long-run estimates are smaller than typical in the literature, and it takes 7-10 years to converge to the long run, implying that (i) the welfare gains from trade are high and (ii) there are substantial convexities in the costs of adjusting export participation.
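The local-projection strategy described in the abstract can be sketched on synthetic data. Everything below is illustrative: the variable names, sample sizes, and the shape of the true response path are assumptions on my part, not the paper's actual panel specification (which uses product-level trade flows and a control group of non-MFN partners).

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic true impulse response: a short-run elasticity near -0.76
# deepening toward roughly -2 at horizon 10 (values illustrative only).
H = 10
true_irf = -0.76 - 1.3 * (np.arange(H + 1) / H)

# One long series of exogenous tariff shocks, d ln(1 + tariff),
# feeding a moving-average process for log imports.
T = 5000
shock = rng.normal(0.0, 0.05, T)
y = rng.normal(0.0, 0.01, T)                 # measurement noise
for h, b in enumerate(true_irf):
    y[h:] += b * shock[:T - h]

# Local projections (Jordà, 2005): for each horizon h, regress the
# outcome at t + h on the shock at t; the slopes trace out the IRF.
est = []
for h in range(H + 1):
    x = shock[:T - H]                        # common estimation sample
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y[h:T - H + h], rcond=None)[0][1]
    est.append(beta)
```

The key design choice in local projections is that each horizon gets its own regression, so no dynamic model needs to be specified; `est` should recover `true_irf` up to sampling noise.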
The Massachusetts Office of Coastal Zone Management launched the Shoreline Change Project in 1989 to identify erosion-prone areas of the coast. The shoreline position and change rate are used to inform management decisions regarding the erosion of coastal resources. In 2001, a shoreline from 1994 was added to calculate both long- and short-term shoreline change rates along ocean-facing sections of the Massachusetts coast. In 2013, two oceanfront shorelines for Massachusetts were added using 2008-9 color aerial orthoimagery and 2007 topographic lidar datasets obtained from the National Oceanic and Atmospheric Administration's Ocean Service, Coastal Services Center. This 2018 data release includes rates that incorporate two new mean high water (MHW) shorelines for the Massachusetts coast extracted from lidar data collected between 2010 and 2014. The first new shoreline for the State includes data from 2010 along the North Shore and South Coast from lidar data collected by the U.S. Army Corps of Engineers (USACE) Joint Airborne Lidar Bathymetry Technical Center of Expertise. Shorelines along the South Shore and Outer Cape are from 2011 lidar data collected by the U.S. Geological Survey's (USGS) National Geospatial Program Office. Shorelines along Nantucket and Martha’s Vineyard are from a 2012 USACE Post Sandy Topographic lidar survey. The second new shoreline for the North Shore, Boston, South Shore, Cape Cod Bay, Outer Cape, South Cape, Nantucket, Martha’s Vineyard, and the South Coast (around Buzzards Bay to the Rhode Island border) is from 2013-14 lidar data collected by the USGS Coastal and Marine Geology Program. This 2018 update of the rate of shoreline change in Massachusetts includes two types of rates. Some of the rates include a proxy-datum bias correction; this is indicated by “PDB” in the filename. The rates that do not account for this correction have “NB” in their file names.
The proxy-datum bias is applied because in some areas a proxy shoreline (like a High Water Line shoreline) has a bias when compared to a datum shoreline (like a Mean High Water shoreline). In areas where it exists, this bias should be accounted for when calculating rates using a mix of proxy and datum shorelines. This issue is explained further in Ruggiero and List (2009) and in the process steps of the metadata associated with the rates. This release includes both long-term (~150 years) and short-term (~30 years) rates. Files associated with the long-term rates have “LT” in their names; files associated with short-term rates have “ST” in their names.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/QCKJYL
This dataset contains replication files for "The Surrogate Index: Combining Short-Term Proxies to Estimate Long-Term Treatment Effects More Rapidly and Precisely" by Susan Athey, Raj Chetty, Guido Imbens, and Hyunseung Kang. For more information, see https://opportunityinsights.org/paper/the-surrogate-index/. A summary of the related publication follows. The impacts of many policies, such as efforts to increase upward income mobility or improve health outcomes, are only observed with long delays. For example, it can take decades to see the effects of early childhood interventions on lifetime earnings. This problem has greatly limited researchers’ and policymakers’ ability to test and improve policies and arises frequently in our own work at Opportunity Insights on the determinants of economic opportunity. In this study, we develop a new method of estimating the long-term impacts of policies more rapidly and precisely using short-term proxies. We predict long-term outcomes (e.g., lifetime earnings) using short-term outcomes (e.g., earnings in early adulthood or test scores). We then show that the causal effects of policies on this predictive index (which we term a “surrogate index”, following terminology in the statistics literature) can help us learn about their long-term impacts more quickly under certain assumptions that are described in the full paper. We apply our method to analyze the long-term impacts of a job training experiment in California. Using short-term employment rates as surrogates, we show that one could have estimated the program’s impact on mean employment rates over a 9-year horizon within 1.5 years, with a 35% reduction in standard errors. The success of the surrogate index in this job training application suggests that our method could be applied to predict the long-term impacts of other programs as well.
Going forward, we hope to build a public library of early indicators (surrogate indices) for social science by harnessing historical experiments along with the large-scale datasets we have built. If you would like to contribute to this effort by reporting a surrogate index that predicts long-term impacts estimated in an experiment, as in the GAIN program, please contact us.
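Under the assumptions spelled out in the paper, the surrogate-index idea reduces to two regression steps: learn a prediction of the long-term outcome from short-term proxies in a dataset where both are observed, then estimate the treatment effect on that prediction in the experiment. The following numpy sketch illustrates the structure on simulated data; all names, sample sizes, and effect sizes are hypothetical, and the real method involves additional identification assumptions and inference details.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 6                                             # number of surrogates
w = rng.uniform(0.5, 1.5, k)                      # unknown true weights

# Observational sample: both surrogates S and the long-term outcome Y.
n_obs = 20000
S_obs = rng.normal(size=(n_obs, k))
Y_obs = S_obs @ w + rng.normal(0.0, 1.0, n_obs)

# Stage 1: learn the surrogate index E[Y | S] by least squares.
X_obs = np.column_stack([np.ones(n_obs), S_obs])
coef = np.linalg.lstsq(X_obs, Y_obs, rcond=None)[0]

# Stage 2: experimental sample where only short-term surrogates exist yet.
n_exp = 4000
treat = rng.integers(0, 2, n_exp).astype(bool)
tau_short = 0.3                                   # effect on each surrogate
S_exp = rng.normal(size=(n_exp, k)) + tau_short * treat[:, None]
index = np.column_stack([np.ones(n_exp), S_exp]) @ coef

# The treatment effect on the index estimates the long-term effect,
# whose true value in this simulation is tau_short * w.sum().
tau_hat = index[treat].mean() - index[~treat].mean()
```

The payoff, as in the GAIN application above, is that `tau_hat` is available as soon as the short-term surrogates are measured, long before the long-term outcome itself.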
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We aim to estimate the geographic distribution of small-scale and large-scale agriculture across water-scarce and water-abundant regions, their blue and green water consumption, and the water stress (stress from a lack of blue water) and soil fertility stress on their crops. We combined three definitions of small-scale agriculture and used a soil fertility-enhanced crop model to estimate crop production and water consumption.
This dataset contains country-level and grid-level results (55 countries). Crop code is open source and freely available on GitHub (https://github.com/Han-Su22/ACEA), which is also archived in Zenodo (DOI: 10.5281/zenodo.10510933) via a Creative Commons Attribution 4.0 International license. All the code, input data, and output data required to reproduce the results in this study will be archived for at least 10 years after publication within the University of Twente, Multidisciplinary Water Management (MWM) group. The MWM group will make the code and data available to anyone upon request.
A detailed method description and analysis can be found in the paper below (please also cite the paper when using the dataset). Feel free to contact the corresponding author Han Su (h.su@utwente.nl) if you have any questions.
Su, H., Foster, T., Hogeboom, R.J., Luna-Gonzalez, D.V., Mialyk, O., Willaarts, B., Wang, Y., Krol, M.S., 2025. Nutrient production, water consumption, and stresses of large-scale versus small-scale agriculture: A global comparative analysis based on a gridded crop model. Global Food Security 45, 100844.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The 2020 Violence Early Warning System (ViEWS) Prediction Competition challenged participants to produce predictive models of violent political conflict at high spatial and temporal resolutions. This paper presents a convolutional long short-term memory (ConvLSTM) recurrent neural network capable of forecasting the log change in battle-related deaths resulting from state-based armed conflict at the PRIO-GRID cell-month level. The ConvLSTM outperforms the benchmark model provided by the ViEWS team and performs comparably to the best models submitted to the competition. In addition to providing a technical description of the ConvLSTM, I evaluate the model's out-of-sample performance and interrogate a selection of interesting model forecasts. I find that the model relies heavily on lagged levels of battle-related fatalities to forecast future decreases in violence. The model struggles to forecast escalations in violence and tends to underpredict the magnitude of escalation while overpredicting the spatial spread of escalation. This dataset contains the files necessary to replicate "High Resolution Conflict Forecasting with Spatial Convolutions and Long Short-Term Memory". The code to do so can be found on Github at https://github.com/benradford/views2020.
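The model's target, the log change in battle-related deaths at the cell-month level, can be illustrated with a toy grid. The `log1p` transform is my assumption to keep zero-fatality cells finite; the paper's exact transformation may differ, and the grid dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy (month, row, col) grid of battle-related deaths per cell-month.
fatalities = rng.poisson(2.0, size=(12, 8, 8)).astype(float)

# Log change from month t-1 to t; log1p keeps zero-fatality cells finite.
log_change = np.log1p(fatalities[1:]) - np.log1p(fatalities[:-1])
```

A ConvLSTM would then be trained on sequences of such grids, exploiting both the spatial layout (via convolutions) and the temporal ordering (via the recurrent cells).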
The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, reaching *** zettabytes in 2024. Over the next five years up to 2028, global data creation is projected to grow to more than *** zettabytes. In 2020, the amount of data created and replicated reached a new high. The growth was higher than previously expected, caused by the increased demand due to the COVID-19 pandemic, as more people worked and learned from home and used home entertainment options more often.
Storage capacity also growing
Only a small percentage of this newly created data is kept, though, as just * percent of the data produced and consumed in 2020 was saved and retained into 2021. In line with the strong growth of the data volume, the installed base of storage capacity is forecast to increase, growing at a compound annual growth rate of **** percent over the forecast period from 2020 to 2025. In 2020, the installed base of storage capacity reached *** zettabytes.
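The compound annual growth rate cited above relates two values a fixed number of years apart. Since the actual zettabyte figures are redacted in this summary, the example inputs below are purely hypothetical.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical inputs (the real zettabyte figures are redacted above):
example_rate = cagr(start_value=6.7, end_value=13.2, years=5)
```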
https://spdx.org/licenses/CC0-1.0.html
Evolutionary ecologists increasingly study reaction norms that are expressed repeatedly within the same individual's lifetime. For example, foragers continuously alter anti-predator vigilance in response to moment-to-moment changes in predation risk. Variation in this form of plasticity occurs both among and within individuals. Among-individual variation in plasticity (individual by environment interaction or I×E) is commonly studied; by contrast, despite increasing interest in its evolution and ecology, within-individual variation in phenotypic plasticity is not. We outline a study design based on repeated measures and a multi-level extension of random regression models that enables quantification of variation in reaction norms at different hierarchical levels (such as among- and within-individuals). The approach enables the calculation of repeatability of reaction norm intercepts (average phenotype) and slopes (level of phenotypic plasticity); these indices are not specific to measurement or scaling and are readily comparable across data sets. The proposed study design also enables calculation of repeatability at different temporal scales (such as short- and long-term repeatability) thereby answering calls for the development of approaches enabling scale-dependent repeatability calculations. We introduce a simulation package in the R statistical language to assess power, imprecision and bias for multi-level random regression that may be utilised for realistic datasets (unequal sample sizes across individuals, missing data, etc). We apply the idea to a worked example to illustrate its utility. We conclude that consideration of multi-level variation in reaction norms deepens our understanding of the hierarchical structuring of labile characters and helps reveal the biology in heterogeneous patterns of within-individual variance that would otherwise remain ‘unexplained’ residual variance.
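As a minimal illustration of the repeatability indices discussed above, the sketch below simulates balanced repeated measures with among-individual variation in reaction-norm intercepts only, and recovers intercept repeatability from one-way ANOVA variance components. Slope (plasticity) repeatability requires the full multi-level random regression described in the abstract; the sample sizes and variances here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Balanced design: 200 individuals, 20 repeated measures each.
n_ind, n_rep = 200, 20
var_among, var_within = 1.0, 1.0          # true repeatability = 0.5
intercepts = rng.normal(0.0, np.sqrt(var_among), n_ind)
y = intercepts[:, None] + rng.normal(0.0, np.sqrt(var_within), (n_ind, n_rep))

# Variance components from a balanced one-way ANOVA:
ms_within = y.var(axis=1, ddof=1).mean()
ms_among = n_rep * y.mean(axis=1).var(ddof=1)
va = (ms_among - ms_within) / n_rep       # among-individual variance
repeatability = va / (va + ms_within)     # repeatability of intercepts
```

Because the index is a ratio of variance components, it is unit-free, which is what makes such repeatabilities comparable across data sets, as the abstract notes.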
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Description of Dataset
This is a study of examples of Russian predicate adjectives in clauses with zero-copula present tense, where the adjective is a short form (SF) or a long form nominative (LF). The data was collected in 2022 from SynTagRus (https://universaldependencies.org/treebanks/ru_syntagrus/index.html), the syntactic subcorpus of the Russian National Corpus (https://ruscorpora.ru/new/). The data merges the results of several searches conducted to extract examples of sentences with long form and short form adjectives in predicate position, as identified by the corpus. The examples were imported to a spreadsheet and annotated manually, based on the syntactic analyses given in the corpus. For present tense sentences with no copula (Река спокойна or Река спокойная, ‘The river is calm’), it was necessary to search for an adjective as the top (root) node in the syntactic structure. The syntactic and morphological categories used in the corpus are explained here: https://ruscorpora.ru/page/instruction-syntax/. In order for the R code to run from these files, one needs to set up an R project with the data files in a folder named "data" and the R markdown files in a folder named "scripts".
Method
Logistic regression analysis of corpus data carried out in R (R version 4.2.3 (2023-03-15), "Shortstop Beagle"; Copyright (C) 2023 The R Foundation for Statistical Computing) and documented in an .Rmd file.
Publication Abstract
The present article presents an empirical investigation of the choice between so-called long (e.g., prostoj ‘simple’) and short forms (e.g., prost ‘simple’) of predicate adjectives in Russian based on data from the syntactic subcorpus of the Russian National Corpus. The data under scrutiny suggest that short forms represent the dominant option for predicate adjectives.
It is proposed that long forms are descriptions of thematic participants in sentences with no complement, while short forms may take complements and describe both participants (thematic and rhematic) and situations. Within the “space of competition” where both long and short forms are well attested, it is argued that the choice of form to some extent depends on subject type, gender/number, and frequency. On the methodological level, the approach adopted in the present study may be extended to other cases of competition in morphosyntax. It is suggested that one should first “peel off” contexts where (nearly) categorical rules are at work, before one undertakes a statistical analysis of the “space of competition”.
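The study's analysis is run in R, but the structure of the model — a binary SF-vs-LF outcome regressed on contextual predictors such as subject type, gender/number, and frequency — can be sketched in pure-numpy Python on simulated data. The encoding, coefficients, and sample size below are all hypothetical; this is only a structural stand-in for the R glm fit.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated predictors (hypothetical encoding: an intercept, a binary
# contextual factor, and a continuous log-frequency score).
n = 2000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n), rng.normal(size=n)])
true_w = np.array([0.8, -1.0, 0.5])            # illustrative coefficients
p = 1.0 / (1.0 + np.exp(-X @ true_w))
y = rng.binomial(1, p)                         # 1 = short form, 0 = long form

# Logistic regression fit by gradient descent on the mean negative
# log-likelihood (a bare-bones stand-in for R's binomial glm).
w = np.zeros(3)
for _ in range(2000):
    grad = X.T @ (1.0 / (1.0 + np.exp(-X @ w)) - y) / n
    w -= 0.5 * grad
```

The fitted `w` recovers the sign and rough size of the simulated effects, which is all a regression of this kind can claim before the "peeling off" of near-categorical contexts that the abstract recommends.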
Dataset consists of 4-day, 14-day, and full life responses of laboratory cultured mayflies (Neocloeon triangulifer) to nickel and zinc exposure. Responses were measured as mortality, body weight, development time, and reproduction. Water quality and analytical chemistry results associated with toxicity data are included. Additional data included are results of experiments assessing proportion of dissolved metal to nominal metal as influenced by the ratio of diatom diet dry mass present per volume of water.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data for the paper "Rainfall-Runoff Prediction at Multiple Timescales with a Single Long Short-Term Memory Network"
GitHub: https://github.com/gauchm/mts-lstm
This dataset contains the hourly NLDAS forcings and USGS streamflow data.
For training with our codebase, we recommend using the combined NetCDF file, but you can also use the csv files (loading the data will take much longer).
Related Datasets: https://doi.org/10.5281/zenodo.4071885 contains the models trained with the forcings and streamflow from this dataset.
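For users working from the csv files, loading and aggregating the hourly data in pandas might look like the sketch below. The column names and file layout are hypothetical stand-ins; consult the dataset's own documentation for the actual headers.

```python
import io
import pandas as pd

# Stand-in for one csv file of hourly forcings/streamflow (headers
# hypothetical; the actual files may use different column names).
csv_text = """date,precipitation,temperature,streamflow
2000-01-01 00:00,0.0,1.5,2.31
2000-01-01 01:00,0.2,1.4,2.35
2000-01-01 02:00,0.1,1.2,2.40
"""

df = pd.read_csv(io.StringIO(csv_text), parse_dates=["date"], index_col="date")
daily = df["streamflow"].resample("D").mean()   # hourly -> daily aggregation
```

Aggregations like this are the kind of multi-timescale view the MTS-LSTM in the paper consumes in a single network.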
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Overview
The study comprises data of a combined fMRI/EEG experiment. The EEG files contain 63 head channels, ECG, EOG, facial EMG and skin conductance data. A physio file contains respiration and finger-pulse data. In addition, a T1 weighted whole-brain anatomical MR scan and a PD weighted (UTE) scan for electrode localization are provided (defacing was performed using https://github.com/cbinyu/pydeface). Additional data of the participants (T2 weighted images, button press dynamics, hearing threshold, hearing abilities, and personality traits (NEO-FFI, BIS/BAS, SVF, ERQ, MMG)) are available on request. The study was conducted at the Combinatorial NeuroImaging (CNI) core facility of the Leibniz Institute for Neurobiology (LIN) Magdeburg and was approved by the ethics committee of the University of Magdeburg, Germany. All participants gave written informed consent. Currently you will only find five datasets that include the multi-dimensional category learning experiment (cf. Wolff & Brechmann, Cerebral Cortex, 2023) because of the copyright policy of OpenNeuro (i.e. CC0). If you are interested in the remaining datasets, please contact brechmann@lin-magdeburg.de. Collaboration is highly welcome!
Details of the learning task
The auditory category learning experiment comprised 180 trials in which 160 different frequency modulated sounds were presented in pseudo-randomized order with a jittered inter-trial interval of 6, 8, or 10 s plus 19-95 ms in steps of 19 ms, ensuring a pseudo-random jitter of the sound onset relative to the onset of the acquisition of an MR volume. Each sound had five different binary features, i.e. duration (short: 400 ms, long: 800 ms), direction of the frequency modulation (rising, falling), intensity (soft: 76–81 dB, loud: 86–91 dB), speed of the frequency modulation (slow: 0.25 octaves/s, fast: 0.5 octaves/s), and frequency range (low: 500–831 Hz, high: 1630–2639 Hz, with 5 different ranges each). Participants had to learn a target category defined by a combination of the features duration and direction (i.e. long/rising, long/falling, short/rising, or short/falling) by trial and error. In each trial, participants had to indicate via button press whether they thought a sound belonged to the target category (right index finger) or not (right middle finger). They received feedback about the correctness of the response by a prerecorded female voice in standard German, e.g., "ja" (yes) or "richtig" (right) following correct responses, and "nein" (no) or "falsch" (wrong) following incorrect responses. In 90% of the trials the feedback immediately followed the button press; in 10% it was delayed by 1500 ms. If participants failed to respond within 2 seconds after FM tone onset, a timeout feedback ("zu spät", too late) was presented. During the ~27 min learning experiment, participants were asked to fixate a white cross on a grey background and avoid any movements. For the 10 min rs-fMRI, they were asked to close their eyes.
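The inter-trial timing described above is easy to reproduce. A small numpy sketch follows; the timing values are taken from the description, while the random draws themselves are only illustrative (the actual experiment used a fixed pseudo-random sequence).

```python
import numpy as np

rng = np.random.default_rng(3)

n_trials = 180
# Base inter-trial interval of 6, 8, or 10 s ...
base = rng.choice([6.0, 8.0, 10.0], size=n_trials)
# ... plus a jitter of 19-95 ms in steps of 19 ms (19, 38, 57, 76, 95 ms),
# which decorrelates sound onsets from MR volume acquisition onsets.
jitter_ms = rng.choice(np.arange(1, 6) * 19, size=n_trials)
iti = base + jitter_ms / 1000.0
```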
Technical details
MR data were acquired with a 3 Tesla MRI scanner (Philips Achieva dStream) equipped with a 32-channel head coil. The MR scanner generates a trigger signal used to synchronize the multimodal data acquisition. The timing of stimulus events and the participants' responses were controlled by the software Presentation (Neurobehavioral Systems) running on a Windows stimulation-PC. Auditory stimuli were presented via a Mark II+ (MR-Confon, Magdeburg, Germany) audio control unit to MR compatible electrodynamic headphones with integrated ear muffs that provide passive damping of ambient scanner noise by ~24 dB. Earplugs (Bilsom 303) further reduce the noise by ~29 dB (SNR). Button presses of the participants were recorded with the ResponseBox 2.0 by Covilex (Magdeburg, Germany), which includes a response pad with two buttons. The device delivers continuous 8-bit data at a sampling rate of 500 Hz; its Teensy microcontroller converts left and right button presses that exceed a defined threshold into USB keyboard events handled by the stimulation-PC. Respiration and heart rate were recorded with Invivo MRI Sensors at a sampling rate of 100 Hz and stored on the MRI acquisition PC at a 496 Hz sampling rate. 64-channel EEG (including ECG) was recorded at 5 kHz using two 32-channel amplifiers BrainAmp MRplus (Brain Products GmbH, Gilching, Germany). The amplifier's discriminative resolution was set to 0.5 µV/bit (range of +/-16.38 mV) and the signals were hardware-filtered in the frequency band between 0.01 Hz and 250 Hz. A bipolar 16-channel amplifier BrainAmp ExG MR was used to record 2 EOG and 4 EMG (Corrugator, Zygomaticus) channels as well as signals from 4 carbon wire loops (CWL) for correcting pulse and motion related artifacts. Another BrainAmp ExG MR amplifier with an ExG AUX box was used to record the skin conductance (GSR) at the index finger of the participant's non-dominant hand. All signals are synchronized with the MR trigger via a Sync box and two USB2 adapters.
All data were recorded and stored with the BrainVision Recorder software. Preprocessing (MR-artifact correction, bandpass filtering between 0.3 and 125 Hz, downsampling to 500 Hz with subsequent CWL correction) and export of the EEG-data was performed in BrainVision Analyzer 2.3. Raw data for optimized artifact correction are available upon request.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
In recent years, numerous AI tools have been employed to equip learners with diverse technical skills such as coding, data analysis, and other competencies related to computational sciences. However, the desired outcomes have not been consistently achieved. This study aims to analyze the perspectives of students and professionals from non-computational fields on the use of generative AI tools, augmented with visualization support, to tackle data analytics projects. The focus is on promoting the development of coding skills and fostering a deep understanding of the solutions generated. Consequently, our research seeks to introduce innovative approaches for incorporating visualization and generative AI tools into educational practices.
Methods
This article examines how learners perform and their perspectives when using traditional tools vs. LLM-based tools to acquire data analytics skills. To explore this, we conducted a case study with a cohort of 59 participants among students and professionals without computational thinking skills. These participants developed a data analytics project in the context of a Data Analytics short session. Our case study focused on examining the participants' performance using traditional programming tools, ChatGPT, and LIDA with GPT as an advanced generative AI tool.
Results
The results show the transformative potential of approaches based on integrating advanced generative AI tools like GPT with specialized frameworks such as LIDA. The higher levels of participant preference indicate the superiority of these approaches over traditional development methods. Additionally, our findings suggest that the learning curves for the different approaches vary significantly, since learners encountered technical difficulties in developing the project and interpreting the results. Our findings suggest that the integration of LIDA with GPT can significantly enhance the learning of advanced skills, especially those related to data analytics.
We aim to establish this study as a foundation for the methodical adoption of generative AI tools in educational settings, paving the way for more effective and comprehensive training in these critical areas.
Discussion
It is important to highlight that when using general-purpose generative AI tools such as ChatGPT, users must be aware of the data analytics process and take responsibility for filtering out potential errors or incompleteness in the requirements of a data analytics project. These deficiencies can be mitigated by using more advanced tools specialized in supporting data analytics tasks, such as LIDA with GPT. However, users still need advanced programming knowledge to properly configure this connection via API. There is a significant opportunity for generative AI tools to improve their performance, providing accurate, complete, and convincing results for data analytics projects, thereby increasing user confidence in adopting these technologies. We hope this work underscores the opportunities and needs for integrating advanced LLMs into educational practices, particularly in developing computational thinking skills.
https://dataverse.no/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18710/ZTQURH
Dataset description
This post provides the data and R scripts for analysis of data on the variation between long form nominative, short form nominative, and instrumental case in Russian predicate adjectives in sentences containing an overt copula verb. We analyze the various factors associated with the choice of form of the adjective. This is the abstract of the article: Based on data from the syntactic subcorpus of the Russian National Corpus, we undertake a quantitative analysis of the competition between Russian predicate adjectives in the instrumental (e.g., pustym ‘empty’), the long form nominative (e.g., pustoj ‘empty’), and the short form nominative (e.g., pust ‘empty’). It is argued that the choice of adjective form is partly determined by the context. Four (nearly) categorical rules are proposed based on the following contextual factors: the form of the copula verb, the presence/absence of a complement, and the nature of the subject of the sentence. At the same time, a “space of competition” is identified, where all three adjective forms are attested. It is hypothesized that within the space of competition, the three forms are recruited to convey different meanings, and it is argued that our analysis lends support to the traditional idea that the short form nominative is closely related to verbs. Our findings are furthermore compatible with the idea that the short form nominative expresses temporary states, rather than inherent permanent characteristics.
Access B2B Contact Data for North American Small Business Owners with Success.ai—your go-to provider for verified, high-quality business datasets. This dataset is tailored for businesses, agencies, and professionals seeking direct access to decision-makers within the small business ecosystem across North America. With over 170 million professional profiles, it’s an unparalleled resource for powering your marketing, sales, and lead generation efforts.
Key Features of the Dataset:
Verified Contact Details
Includes accurate and up-to-date email addresses and phone numbers to ensure you reach your targets reliably.
AI-validated for 99% accuracy, eliminating errors and reducing wasted efforts.
Detailed Professional Insights
Comprehensive data points include job titles, skills, work experience, and education to enable precise segmentation and targeting.
Enriched with insights into decision-making roles, helping you connect directly with small business owners, CEOs, and other key stakeholders.
Business-Specific Information
Covers essential details such as industry, company size, location, and more, enabling you to tailor your campaigns effectively. Ideal for profiling and understanding the unique needs of small businesses.
Continuously Updated Data
Our dataset is maintained and updated regularly to ensure relevance and accuracy in fast-changing market conditions. New business contacts are added frequently, helping you stay ahead of the competition.
Why Choose Success.ai?
At Success.ai, we understand the critical importance of high-quality data for your business success. Here’s why our dataset stands out:
Tailored for Small Business Engagement Focused specifically on North American small business owners, this dataset is an invaluable resource for building relationships with SMEs (Small and Medium Enterprises). Whether you’re targeting startups, local businesses, or established small enterprises, our dataset has you covered.
Comprehensive Coverage Across North America Spanning the United States, Canada, and Mexico, our dataset ensures wide-reaching access to verified small business contacts in the region.
Categories Tailored to Your Needs Includes highly relevant categories such as Small Business Contact Data, CEO Contact Data, B2B Contact Data, and Email Address Data to match your marketing and sales strategies.
Customizable and Flexible Choose from a wide range of filtering options to create datasets that meet your exact specifications, including filtering by industry, company size, geographic location, and more.
Best Price Guaranteed We pride ourselves on offering the most competitive rates without compromising on quality. When you partner with Success.ai, you receive superior data at the best value.
Seamless Integration Delivered in formats that integrate effortlessly with your CRM, marketing automation, or sales platforms, so you can start acting on the data immediately.
Use Cases: This dataset empowers you to:
- Drive Sales Growth: Build and refine your sales pipeline by connecting directly with decision-makers in small businesses.
- Optimize Marketing Campaigns: Launch highly targeted email and phone outreach campaigns with verified contact data.
- Expand Your Network: Leverage the dataset to build relationships with small business owners and other key figures within the B2B landscape.
- Improve Data Accuracy: Enhance your existing databases with verified, enriched contact information, reducing bounce rates and increasing ROI.
Industries Served: Whether you're in B2B SaaS, digital marketing, consulting, or any field requiring accurate and targeted contact data, this dataset serves industries of all kinds. It is especially useful for professionals focused on:
- Lead Generation
- Business Development
- Market Research
- Sales Outreach
- Customer Acquisition
What’s Included in the Dataset: Each profile provides:
- Full Name
- Verified Email Address
- Phone Number (where available)
- Job Title
- Company Name
- Industry
- Company Size
- Location
- Skills and Professional Experience
- Education Background
With over 170 million profiles, you can tap into a wealth of opportunities to expand your reach and grow your business.
Why High-Quality Contact Data Matters: Accurate, verified contact data is the foundation of any successful B2B strategy. Reaching small business owners and decision-makers directly ensures your message lands where it matters most, reducing costs and improving the effectiveness of your campaigns. By choosing Success.ai, you ensure that every contact in your pipeline is a genuine opportunity.
Partner with Success.ai for Better Data, Better Results: Success.ai is committed to delivering premium-quality B2B data solutions at scale. With our small business owner dataset, you can unlock the potential of North America's dynamic small business market.
Get Started Today Request a sample or customize your dataset to fit your unique...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Preliminary Information Only: Files will be updated upon the article’s acceptance by Sensors.
The attached dataset contains over 17.5 hours of experimental sensor data, including measurements from the following sensors:
- Front axle steering angle [°]
- Longitudinal acceleration [g]
- Lateral acceleration [g]
- Yaw rate [deg/s]
- Wheel speed (front left) [km/h]
- Wheel speed (front right) [km/h]
- Wheel speed (rear left) [km/h]
- Wheel speed (rear right) [km/h]
Data was sampled every 0.01 seconds (100 Hz) and includes three distinct driving scenarios: calm driving, aggressive driving, and city driving. The dataset also captures variations such as reduced tire pressure (one tire at a time), different passenger loads, and measurements from three different vehicles. The data was collected at the Continental Test Track in Veszprém, Hungary, as well as within the city of Veszprém. The data is stored in Apache Parquet format and can be processed via the Pandas library in Python.
For more information, please see our article "Sensitivity Analysis of Long Short-Term Memory-based Neural Network Model for Vehicle Yaw Rate Prediction" in MDPI Sensors.
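Since the dataset is meant to be processed with pandas, a minimal sketch of working with such a table follows. The column names and the parquet file name are hypothetical stand-ins for the dataset's actual schema; a tiny in-memory frame is used so the example is self-contained.

```python
import pandas as pd

# Tiny stand-in for one recording; in practice the files would be read
# with pd.read_parquet("calm_driving.parquet") (file name hypothetical).
df = pd.DataFrame({
    "steering_angle_deg": [0.0, 0.5, 1.0, 1.5],
    "yaw_rate_deg_s":     [0.0, 0.2, 0.4, 0.6],
    "wheel_speed_fl_kmh": [50.0, 50.1, 50.2, 50.3],
    "wheel_speed_fr_kmh": [50.0, 50.0, 50.1, 50.2],
})
df["time_s"] = df.index * 0.01                  # 0.01 s sampling period
mean_front_speed = df[["wheel_speed_fl_kmh", "wheel_speed_fr_kmh"]].mean(axis=1)
```

Derived channels like the mean front wheel speed are typical inputs for the kind of LSTM yaw-rate model the article analyzes.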
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This table covers investments of institutional investors from 1950 onwards. It enables analyzing shifts over time in the investment portfolio of institutional investors. This is possible for the total of institutional investors, and for each of the three groups: pension funds, insurance corporations and investment funds.
Data available from 1950 to 2012.
Status of the figures: Figures up to and including 2010 are definitive, figures for 2011 are revised provisional, and figures for 2012 are provisional. Because this table has been discontinued, the figures will no longer be updated.
Changes as of 18 December 2014: None, this table is discontinued.
When will new figures be published? Not applicable anymore. This table is replaced by table Institutional investors; short-term and long-term investments. See paragraph 3.
https://spdx.org/licenses/CC0-1.0.html
1) Despite its central importance for life-history theory, the effect of egg size on offspring fitness is still considered ambiguous. Most previous studies were only observational and consequently might suffer from uncontrolled correlations between egg size and parental/territory quality. Even after cross-fostering is performed, direct genetic effects and parental adjustment of postnatal care might confound estimates of egg-size effects per se.
2) I performed a full cross-fostering experiment in the collared flycatcher (Ficedula albicollis), exchanging whole clutches between pairs of nests. I statistically controlled for direct genetic effects and parental feeding frequencies. I followed young until recruitment to estimate the long-term effects of egg size and parental provisioning. In addition, I compared the effects obtained in the cross-fostering experiment with those obtained from a set of unmanipulated nests.
3) Egg size per se affected offspring morphology in both the short and long term, while having no effect on offspring survival and immunity. Egg-size effects were not confounded by parental postnatal care or direct genetic effects.
4) The number of care-givers was an influential predictor of nestling performance. Apart from the variation caused by this factor, feeding frequencies had no consistent effect on offspring performance.
5) Fitness benefits of large eggs may be difficult to establish due to variation of egg-size effects between years and habitats. Feeding frequency may affect offspring state, but offspring state may also affect feeding frequency. This varying causality between feeding rate and offspring state may preclude the detection of a positive effect of the former on the latter.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Brazil Banco do Estado do Para SA: Liabilities: Short and Long Term Liabilities data was reported at 6,493,044.000 BRL th in Mar 2019. This records an increase from the previous number of 5,903,997.000 BRL th for Dec 2018. Brazil Banco do Estado do Para SA: Liabilities: Short and Long Term Liabilities data is updated quarterly, averaging 1,563,568.000 BRL th from Mar 2000 (Median) to Mar 2019, with 77 observations. The data reached an all-time high of 6,493,044.000 BRL th in Mar 2019 and a record low of 419,747.000 BRL th in Dec 2000. Brazil Banco do Estado do Para SA: Liabilities: Short and Long Term Liabilities data remains active status in CEIC and is reported by Central Bank of Brazil. The data is categorized under Brazil Premium Database’s Banking Sector – Table BR.KBB033: Commercial Banks: Assets and Liabilities: Banco do Estado do Para SA.
The Massachusetts Office of Coastal Zone Management launched the Shoreline Change Project in 1989 to identify erosion-prone areas of the coast and support local land-use decisions. Trends of shoreline position over long- and short-term timescales provide information to landowners, managers, and potential buyers about possible future impacts to coastal resources and infrastructure. In 2001, a 1994 shoreline was added to calculate both long- and short-term shoreline change rates along ocean-facing sections of the Massachusetts coast. In 2013, two oceanfront shorelines for Massachusetts were added using 2008-2009 color aerial orthoimagery and 2007 topographic lidar datasets obtained from NOAA's Ocean Service, Coastal Services Center. In 2018, two new mean high water (MHW) shorelines for the Massachusetts coast, extracted from lidar data collected between 2010 and 2014, were added to the dataset. This 2021 data release includes rates that incorporate one new shoreline extracted from 2018 lidar data collected by the U.S. Army Corps of Engineers (USACE) Joint Airborne Lidar Bathymetry Technical Center of Expertise (JALBTCX), added to the existing database of all historical shorelines (1844-2014), for the North Shore, South Shore, Cape Cod Bay, Outer Cape, Buzzard’s Bay, South Cape, Nantucket, and Martha’s Vineyard. The 2018 lidar data did not cover the Boston or Elizabeth Islands regions. Included in this data release is a proxy-datum bias reference line that accounts for the positional difference between a proxy shoreline (like a High Water Line shoreline) and a datum shoreline (like a Mean High Water shoreline). This issue is explained further in Ruggiero and List (2009) and in the process steps of the metadata associated with the rates. This release includes both long-term (~150+ years) and short-term (~30 years) rates. Files associated with long-term rates have "LT" in their names; files associated with short-term rates have "ST" in their names.
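Shoreline change rates of this kind are commonly computed as the slope of a linear regression of shoreline position against time at each cross-shore transect (the approach used by tools such as the USGS Digital Shoreline Analysis System). A minimal sketch with made-up positions, not values from this release:

```python
import numpy as np

# Hypothetical shoreline positions (metres seaward of a baseline) at one
# transect; the survey years loosely mirror the long-term record described
# above. Negative slope = erosion.
years = np.array([1850.0, 1952.0, 1994.0, 2008.0, 2014.0, 2018.0])
positions = np.array([120.0, 95.0, 82.0, 78.0, 75.0, 74.0])

# Long-term linear-regression rate in metres per year.
rate_m_per_yr, intercept = np.polyfit(years, positions, 1)
print(rate_m_per_yr)
```

A short-term rate would use the same fit restricted to the most recent ~30 years of shorelines.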
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Using comprehensive high-frequency state and local sales tax data, we show that shopping behavior responds strongly to changes in sales tax rates. Even though sales taxes are not observed in posted prices and have a wide range of rates and exemptions, consumers adjust in many dimensions. They stock up on storable goods before taxes rise and increase online and cross-border shopping in both the short and long run. The difference between short- and long-run spending responses has important implications for the efficacy of using sales taxes for counter-cyclical policy and for the design of an optimal tax framework. Interestingly, households adjust spending similarly for both taxable and tax-exempt goods. We embed an inventory problem into a continuous-time consumption-savings model and demonstrate that this behavior is optimal in the presence of shopping trip fixed costs. The model successfully matches estimated short-run and long-run tax elasticities. We provide additional evidence in favor of this new shopping-complementarity mechanism.