https://www.researchnester.com
The global variable data printing market is set to rise from USD 11.02 billion in 2024 to USD 80.25 billion by 2037, exhibiting a CAGR of more than 16.5% over the forecast period (2025-2037). Key industry players include Canon Inc., 3M Company, and Xerox Corporation, among others.
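The growth figures above imply a specific compound annual growth rate, which is easy to sanity-check. A minimal sketch, using only the numbers stated in the entry:

```python
# Verify the implied CAGR: the market grows from USD 11.02 bn (2024) to
# USD 80.25 bn (2037), i.e. over 13 years.
start, end, years = 11.02, 80.25, 2037 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # 16.5%, matching the report
```

The arithmetic confirms the stated "more than 16.5%" figure (the exact value is about 16.50%).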
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Footnote: (f) denotes a categorical variable, (c) a continuous covariate and (n) a nominal variable.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This record contains the data and MATLAB code that support the findings of the master's thesis "Evaluation of the handling of a variable dynamics tilting tricycle".
The objective of the experimental study is to find the configuration of the tilt mechanism of the Dressel tilting tricycle with the best handling performance. The MATLAB code calculates handling performance from raw data obtained by gyroscopes. A slalom manoeuvre and a low-speed line-following manoeuvre were performed, and the code supplies the methods for processing the data. The datasets contain the results of the repeated trials, as well as the velocities of the different vehicles. A further MATLAB file was used to optimize the dimensions of the tilt mechanism for a larger tilt limit, using a simplified model of the tricycle.
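One routine step in processing raw gyroscope data for a tilting vehicle is integrating the measured roll rate over time to recover the tilt angle. The sketch below illustrates that step only; the function, sample values, and trapezoidal scheme are illustrative assumptions, not the thesis's actual MATLAB method.

```python
# Hedged sketch: integrate gyroscope roll rate (rad/s) to obtain tilt angle (rad),
# assuming a zero initial tilt. Trapezoidal rule; inputs are hypothetical.
def integrate_roll_rate(times, roll_rates):
    """Trapezoidal integration of roll rate -> tilt angle (initial angle = 0)."""
    angles = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        angles.append(angles[-1] + 0.5 * (roll_rates[i] + roll_rates[i - 1]) * dt)
    return angles

# A constant roll rate of 0.2 rad/s held for 1 s yields a 0.2 rad tilt.
print(integrate_roll_rate([0.0, 0.5, 1.0], [0.2, 0.2, 0.2])[-1])  # 0.2
```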
This data set provides information on outstanding New York City bonds, interest rate exchange agreements, and projected debt service on those bonds.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Among the various approaches for implementing prognostic algorithms, data-driven algorithms are popular in industry due to their intuitive nature and relatively fast development cycle. However, no matter how easy it may seem, there are several pitfalls one must watch out for while developing a data-driven prognostic algorithm. One such pitfall is the uncertainty inherent in the system. At each processing step, uncertainties get compounded and, if not carefully managed, can grow beyond control in the predictions. This paper presents analysis from our preliminary development of a data-driven algorithm for predicting end of discharge of Li-ion batteries using constant-load experiment data, and the challenges faced when applying these algorithms to randomized variable loading profiles, as is the case in realistic applications. Lessons learned during the development phase are presented.
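The compounding of uncertainty described above is often handled by propagating samples rather than point values. The sketch below is not the paper's algorithm; it uses a deliberately toy constant-current discharge model to show how load uncertainty alone turns an end-of-discharge (EOD) prediction into a distribution.

```python
# Hedged sketch: Monte Carlo propagation of load uncertainty in a toy battery
# EOD prediction. The discharge model (time = capacity / current) and all
# parameter values are invented for illustration.
import random
import statistics

def eod_time(capacity_ah, load_a):
    """Toy model: hours to drain a capacity (Ah) at a constant load current (A)."""
    return capacity_ah / load_a

def eod_distribution(capacity_ah, mean_load_a, load_sd_a, n=10_000, seed=42):
    """Sample uncertain loads and return the resulting distribution of EOD times."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        load = max(1e-6, rng.gauss(mean_load_a, load_sd_a))  # guard against <= 0
        samples.append(eod_time(capacity_ah, load))
    return samples

samples = eod_distribution(capacity_ah=2.0, mean_load_a=1.0, load_sd_a=0.2)
print(statistics.mean(samples), statistics.stdev(samples))
```

Even in this toy setting the spread of the EOD distribution grows with the load uncertainty, which is the effect the paper warns must be managed at every processing step.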
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This report describes support for a new type of variable-width line in the 'vwline' package for R that is based on Bezier curves. There is also a new function for specifying the width of a variable-width line based on Bezier curves and there is a new linejoin and lineend style, called "extend", that is available when both the line and the width of the line are based on Bezier curves. This report also introduces a small 'gridBezier' package for drawing Bezier curves in R.
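The 'vwline' and 'gridBezier' features described above are built on Bezier curve evaluation. As a standalone illustration (in Python, not the R packages themselves), the standard primitive is De Casteljau's algorithm: repeated linear interpolation between control points.

```python
# Hedged illustration: evaluate a point on a Bezier curve by De Casteljau's
# algorithm (repeated linear interpolation). This is the general primitive
# such packages build on, not code from 'vwline' or 'gridBezier'.
def de_casteljau(points, t):
    """Evaluate a 2D Bezier curve with the given control points at parameter t."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [
            ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Midpoint of a symmetric cubic: control points (0,0),(0,1),(1,1),(1,0).
print(de_casteljau([(0, 0), (0, 1), (1, 1), (1, 0)], 0.5))  # (0.5, 0.75)
```

A variable-width line, as in 'vwline', can then be thought of as sweeping a width profile (itself possibly Bezier-defined) along such a curve.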
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains simulated datasets, empirical data, and R scripts described in the paper: "Li, Q. and Kou, X. (2021) WiBB: An integrated method for quantifying the relative importance of predictive variables. Ecography (DOI: 10.1111/ecog.05651)".
A fundamental goal of scientific research is to identify the underlying variables that govern crucial processes of a system. Here we propose a new index, WiBB, which integrates the merits of several existing methods: a model-weighting method from information theory (Wi), a standardized regression coefficient method measured by β* (B), and the bootstrap resampling technique (B). We applied WiBB to simulated datasets with known correlation structures, for both linear models (LM) and generalized linear models (GLM), to evaluate its performance. We also applied two other methods, the relative sum of weights (SWi) and the standardized beta (β*), to compare their performance with the WiBB method in ranking predictor importance under various scenarios. We further applied it to an empirical dataset on the plant genus Mimulus to select bioclimatic predictors of species' presence across the landscape. Results on the simulated datasets showed that the WiBB method outperformed the β* and SWi methods in scenarios with small and large sample sizes, respectively, and that the bootstrap resampling technique significantly improved discriminant ability. When testing WiBB on the empirical dataset with GLM, it sensibly identified four important predictors with high credibility out of six candidates in modeling the geographical distributions of 71 Mimulus species. This integrated index has great advantages in evaluating predictor importance, and hence in reducing the dimensionality of data, without losing interpretive power. The simplicity of calculating the new metric, compared with more sophisticated statistical procedures, makes it a handy addition to the statistical toolbox.
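One ingredient of the WiBB index named above is the information-theoretic model weight (the "Wi" component). As a hedged sketch of that standard calculation only (the AIC values are invented, and this is not the paper's full WiBB procedure): given AIC values for a candidate model set, the weight of model i is exp(-Δi/2) normalized over the set, where Δi is the AIC difference to the best model.

```python
# Hedged sketch of Akaike model weights, one building block of WiBB.
# The AIC values below are made up for illustration.
import math

def akaike_weights(aics):
    """Akaike weights: exp(-0.5 * delta_i) normalized over the model set."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

weights = akaike_weights([100.0, 102.0, 110.0])
print([round(w, 3) for w in weights])  # [0.727, 0.268, 0.005]
```

WiBB, as described, then combines such weights with bootstrap-resampled standardized coefficients to rank predictors.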
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: There is a need to develop harmonized procedures and a Minimum Data Set (MDS) for cross-border Multi Casualty Incidents (MCI) in medical emergency scenarios, to ensure appropriate management of such incidents regardless of the place, language, and internal processes of the institutions involved. That information should be capable of being communicated in real time to the command-and-control chain. It is crucial that the models adopted are interoperable between countries, so that patients' rights to cross-border healthcare are fully respected.

Objective: To optimize the management of cross-border Multi Casualty Incidents through a Minimum Data Set collected and communicated in real time to the chain of command and control for each incident, and to determine the degree of agreement among experts.

Method: We used the modified Delphi method, supplemented with the Utstein technique, to reach consensus among experts. In the first phase, the minimum requirements of the project, the profile of the experts who were to participate, the basic requirements of each variable chosen, and the way of collecting the data were defined, with bibliography on the subject provided. In the second phase, the preliminary variables were grouped into 6 clusters, and the objectives, the characteristics of the variables, and the logistics of the work were approved. Several meetings were held to choose the MDS variables by consensus using a modified Delphi technique; each expert scored each variable from 1 to 10. Variables receiving no votes were eliminated, and the round of voting ended. In the third phase, the Utstein style was applied to discuss each group of variables and choose those with the highest consensus. After several rounds of discussion, it was agreed to eliminate the variables with a score of less than 5 points. In phase four, the researchers submitted the variables to the external experts for final assessment and validation before their use in the simulations.
Data were analysed with SPSS Statistics (IBM, version 2) software.

Results: Six data entities with 31 sub-entities were defined, generating 127 items representing the final MDS regarded as essential for incident management. The level of consensus for the choice of items was very high and was highest for the category ‘Incident’, with an overall kappa of 0.7401 (95% CI 0.1265–0.5812, p 0.000), a good level of consensus in the Landis and Koch model. The items with the greatest degree of consensus (a score of ten) were those relating to the location, type, date, time, and identification of the incident. All items met the criteria set, such as digital collection and real-time transmission to the chain of command and control.

Conclusions: This study documents the development of an MDS through consensus, with a high degree of agreement among a group of experts of different nationalities working in different fields. All items in the MDS were digitally collected and forwarded in real time to the chain of command and control. This tool has demonstrated its validity in four large cross-border simulations involving more than eight countries and their emergency services.
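The study summarizes inter-expert agreement with a kappa statistic. As a hedged, self-contained illustration of how such a statistic is computed (the counts below are invented and unrelated to the study's data), here is Cohen's kappa for two raters from an agreement table:

```python
# Hedged illustration of Cohen's kappa for two raters. The 2x2 agreement
# counts are hypothetical, not the study's data.
def cohens_kappa(table):
    """table[i][j] = count of items rater A put in class i and rater B in class j."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n          # observed agreement
    rows = [sum(row) for row in table]                   # rater A marginals
    cols = [sum(row[j] for row in table) for j in range(k)]  # rater B marginals
    pe = sum(rows[i] * cols[i] for i in range(k)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# 45 + 40 agreements out of 100 rated items:
print(round(cohens_kappa([[45, 5], [10, 40]]), 3))  # 0.7
```

On the Landis and Koch scale cited in the study, values in the 0.61-0.80 range are conventionally read as substantial agreement.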
This repository contains data from a study to investigate possible effects of vection on postural control. The data contain two variables (anterior-posterior displacement and total length of trajectory) describing posture, and three variables (rating, latency and duration) describing vection. The experiment used three visual field conditions and five amplitude conditions; posture and vection variables are available for each condition. See Horiuchi et al. (2021) for details.
The data set comprises posture and vection data collected from 19 participants using a Wii Fit board and Wii Remote. The variables presented in the data set have been calculated from the raw data (see the technical document for details). The experiment used 15 experimental conditions, and the variables are presented for each participant in each condition.
This dataset contains tabular files with information about the usage preferences of speakers of Maltese English with regard to 63 pairs of lexical expressions. These pairs (e.g. truck-lorry or realization-realisation) are known to differ in usage between BrE and AmE (cf. Algeo 2006). The data were elicited with a questionnaire that asks informants to indicate whether they always use one of the two variants, prefer one over the other, have no preference, or do not use either expression (see Krug and Sell 2013 for methodological details). Usage preferences were therefore measured on a symmetric 5-point ordinal scale. Data were collected between 2008 and 2018, as part of a larger research project on lexical and grammatical variation in settings where English is spoken as a native, second, or foreign language. The current dataset, which we use for our methodological study on ordinal data modeling strategies, consists of a subset of 500 speakers that is roughly balanced on year of birth.

Abstract of the related publication: In empirical work, ordinal variables are typically analyzed using means based on numeric scores assigned to categories. While this strategy has met with justified criticism in the methodological literature, it also generates simple and informative data summaries, a standard often not met by statistically more adequate procedures. Motivated by a survey of how ordered variables are dealt with in language research, we draw attention to an un(der)used latent-variable approach to ordinal data modeling, which constitutes an alternative perspective on the most widely used form of ordered regression, the cumulative model. Since the latent-variable approach does not feature in any of the studies in our survey, we believe it is worthwhile to promote its benefits. To this end, we draw on questionnaire-based preference ratings by speakers of Maltese English, who indicated on a 5-point scale which of two synonymous expressions (e.g. package-parcel) they (tend to) use.
We demonstrate that a latent-variable formulation of the cumulative model affords nuanced and interpretable data summaries that can be visualized effectively, while at the same time avoiding limitations inherent in mean response models (e.g. distortions induced by floor and ceiling effects). The online supplementary materials include a tutorial for its implementation in R.
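The latent-variable view of the cumulative model can be sketched concretely: an unobserved continuous preference η plus logistic noise is cut at ordered thresholds τ into K categories, so that P(Y ≤ k) = logistic(τk − η). The thresholds and η below are invented for illustration; the paper's own tutorial implements this in R.

```python
# Hedged sketch of a cumulative logit (latent-variable) ordinal model.
# Threshold values and eta are hypothetical.
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def category_probs(eta, thresholds):
    """Probabilities of K ordered categories given K-1 thresholds and latent mean eta."""
    cum = [logistic(t - eta) for t in thresholds] + [1.0]  # cumulative P(Y <= k)
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# A 5-point preference scale needs 4 thresholds; a positive eta shifts
# probability mass toward the higher categories.
probs = category_probs(eta=0.5, thresholds=[-2.0, -1.0, 1.0, 2.0])
print([round(p, 3) for p in probs])
```

Because the model lives on the latent scale, shifts in η summarize preference differences without the floor and ceiling distortions that afflict mean response scores.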
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Empirical Type I, 8 variables, two-sided tests.
The U.S. Geological Survey (USGS) has developed and implemented an algorithm that identifies burned areas in temporally dense time series of Landsat image stacks to produce the Landsat Burned Area Essential Climate Variable (BAECV) products. The algorithm makes use of predictors derived from individual Landsat scenes, lagged reference conditions, and change metrics between the scene and reference conditions. Outputs of the BAECV algorithm consist of pixel-level burn probabilities for each Landsat scene, and annual burn probability, burn classification, and burn date composites. These products were generated for the conterminous United States for 1984 through 2015. These data are also available for download at https://gsc.cr.usgs.gov/outgoing/baecv/BAECV_CONUS_v1.1_2017/. Additional details about the algorithm used to generate these products are described in: Hawbaker, T.J., Vanderhoof, M.K., Beal, Y.G., Takacs, J.D., Schmidt, G.L., Falgout, J.T., Williams, B., Brunner, N.M., Caldwell, M.K., Picotte, J.J., Howard, S.M., Stitt, S., and Dwyer, J.L., 2017. Mapping burned areas using dense time-series of Landsat data. Remote Sensing of Environment 198, 504-522. doi:10.1016/j.rse.2017.06.027. First release: 2017. Revised: September 2017 (ver. 1.1).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains two examples of Pagure datasets, corresponding to three surveys:
- CGFS, conducted in 2018 in the English Channel (northeast Atlantic)
- EPIBENGOL, conducted in 2019 in the Gulf of Lion (western Mediterranean)
- EVHOE, conducted in 2020 in the Bay of Biscay and Celtic shelf (northeast Atlantic)
Files include metadata for the sampling stations and annotation files; a readme tex file contains the links to the voyage metadata. This folder is aimed at providing an example of a documented underwater imagery dataset. These data are part of the data exchange conducted in the quatrea collaboration between the French Research Institute for the Exploitation of the Sea (Ifremer), the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the University of Tasmania (UTAS).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
20 global import shipment records of Variable Resistance, with prices, volume, and current buyer-supplier relationships, based on an actual global export trade database.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
106 global import shipment records of Variable Frequency Drive, with prices, volume, and current buyer-supplier relationships, based on an actual global export trade database.
This layer shows median earnings by occupational group, by tract, county, and state boundaries. The service is updated annually to contain the most currently released American Community Survey (ACS) 5-year data, and contains estimates and margins of error. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. Only full-time, year-round workers are included. Median earnings are based on earnings in the 12 months preceding the survey. Occupation groups are based on the Bureau of Labor Statistics (BLS) Standard Occupational Classification (SOC). This layer is symbolized to show median earnings of the full-time, year-round civilian employed population. To see the full list of attributes available in this service, go to the "Data" tab and choose "Fields" at the top right.

Current Vintage: 2019-2023
ACS Table(s): B24021
Data downloaded from: Census Bureau's API for American Community Survey
Date of API call: December 12, 2024
National Figures: data.census.gov

The United States Census Bureau's American Community Survey (ACS): About the Survey, Geography & ACS, Technical Documentation, News & Updates. This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.

Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value.
In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

Data Processing Notes: This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates and is updated annually within days of the Census Bureau's release schedule. Click here to learn more about ACS data releases. Boundaries come from the US Census TIGER geodatabases, specifically the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER; water bodies and rivers that are 50 million square meters or larger (mid- to large-sized water bodies) are erased from the tract-level boundaries, as are additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2023 500k TIGER Cartographic Boundary Shapefiles, erased to more accurately portray the coastlines and Great Lakes.

The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (census tracts beginning with 99). Percentages, derived counts, and associated margins of error are calculated values (identifiable by the "_calc_" stub in the field name) and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page.

Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:
- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error, and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest or upper interval of an open-ended distribution.
- The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.
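The margin-of-error arithmetic described in the Census data note is mechanical and worth showing once. A minimal sketch, with invented example values: ACS publishes a 90 percent margin of error (MOE), the standard error is MOE / 1.645 (the Census-documented divisor), and the 90 percent confidence interval is the estimate plus or minus the MOE.

```python
# Hedged sketch of ACS margin-of-error arithmetic. The estimate and MOE
# values are invented for illustration.
def acs_interval(estimate, moe_90):
    """Return (standard error, 90% confidence interval) from a 90% MOE."""
    se = moe_90 / 1.645  # Census-documented divisor for a 90 percent MOE
    return se, (estimate - moe_90, estimate + moe_90)

se, (lo, hi) = acs_interval(estimate=52_000, moe_90=3_290)
print(round(se), lo, hi)  # 2000 48710 55290
```

The same conversion underlies the "_calc_" margin-of-error fields mentioned above, which follow the ACS specifications for derived estimates.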
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Chudik, Kapetanios, and Pesaran (Econometrica 2018, 86, 1479-1512) propose a one-covariate-at-a-time multiple testing (OCMT) approach to variable selection in high-dimensional linear regression models as an alternative to penalised regression. We offer a narrow replication of their key OCMT results using Stata instead of the original MATLAB routines. Using the new user-written Stata commands baing and ocmt, we find results that closely match those reported by these authors in their Monte Carlo simulations. In addition, we exactly replicate their findings in the empirical illustration, which relate to the top five variables with the highest inclusion frequencies under the OCMT selection method.
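The core OCMT idea of screening covariates one at a time can be sketched in a few lines: regress the outcome on each candidate covariate separately and keep those whose t-statistic clears a threshold. This is a bare illustration of the selection principle only, in Python rather than the paper's MATLAB or the replication's Stata commands, and it omits the paper's multiple-testing critical values and iterative stages.

```python
# Hedged sketch of one-covariate-at-a-time screening. Data are simulated;
# the threshold 2.58 is an illustrative choice, not the paper's critical value.
import math
import random

def simple_ols_t(x, y):
    """Slope t-statistic of the univariate regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(rss / (n - 2) / sxx)
    return b / se_b

rng = random.Random(0)
n = 200
signal = [rng.gauss(0, 1) for _ in range(n)]     # a true predictor
noise_var = [rng.gauss(0, 1) for _ in range(n)]  # an irrelevant covariate
y = [2.0 * s + rng.gauss(0, 1) for s in signal]
selected = [name for name, x in [("signal", signal), ("noise", noise_var)]
            if abs(simple_ols_t(x, y)) > 2.58]
print(selected)  # the true predictor should be selected; the noise one typically not
```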
MacroconomicData.csv - contains the relevant US macro data.
capitalparameters.csv, finalgoodparameters.csv, laborparameters.csv, oneshiftparameters.csv - contain the parameter distributions generated by Metropolis-Hastings for the model.
DataandPrograms.zip - contains all MATLAB files used. See the Readme for details about how to run these files.
This layer contains 2010-2014 American Community Survey (ACS) 5-year data, including estimates and margins of error. The layer shows median household income by race and by age of householder, by tract, county, and state boundaries. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. Median income and income source are based on income in the 12 months preceding the survey. This layer is symbolized to show median household income. To see the full list of attributes available in this service, go to the "Data" tab and choose "Fields" at the top right.

Vintage: 2010-2014
ACS Table(s): B19013B, B19013C, B19013D, B19013E, B19013F, B19013G, B19013H, B19013I, B19049, B19053
Data downloaded from: Census Bureau's API for American Community Survey
Date of API call: November 28, 2020
National Figures: data.census.gov

The United States Census Bureau's American Community Survey (ACS): About the Survey, Geography & ACS, Technical Documentation, News & Updates. This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.

Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value.

In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

Data Processing Notes: This layer has associated layers containing the most recent ACS data available from the U.S. Census Bureau. Click here to learn more about ACS data releases, and click here for the associated boundaries layer. This data is 5+ years older than the most recent vintage because of overlapping survey years; the U.S. Census Bureau recommends comparing only non-overlapping datasets. Boundaries come from the US Census TIGER geodatabases. The boundary vintage (2014) appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines clipped for cartographic purposes. For census tracts, the water cutouts are derived from a subset of the 2010 AWATER (Area Water) boundaries offered by TIGER. For state and county boundaries, the water and coastlines are derived from the coastlines of the 500k TIGER Cartographic Boundary Shapefiles. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (census tracts beginning with 99). Percentages, derived counts, and associated margins of error are calculated values (identifiable by the "_calc_" stub in the field name) and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page.

Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:
- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error, and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest or upper interval of an open-ended distribution.
- The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.