Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises sea surface height (SSH) and velocity data at the ocean surface in two small regions near the Agulhas retroflection. The unfiltered SSH and a horizontal velocity field are provided, along with the same fields after various kinds of filtering, as described in the accompanying manuscript, Using Lagrangian filtering to remove waves from the ocean surface velocity field (https://doi.org/10.31223/X5D352). The code repository for this work is https://github.com/cspencerjones/separating-balanced.
Two time-resolutions are provided: two weeks of hourly data and 70 days of daily data.
Seventy_daysA.nc contains daily data for region A and Seventy_daysB.nc contains daily data for region B, including unfiltered, Lagrangian-filtered, and omega-filtered velocity and sea surface height.
two_weeksA.nc contains hourly data for region A and two_weeksB.nc contains hourly data for region B, including unfiltered and Lagrangian-filtered velocity and sea surface height.
Note that region A has been moved in version 2 of this dataset.
See the manuscript and code repository for more information.
This work was supported by NASA award 80NSSC20K1142.
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
This dataset provides detailed information on the performance and efficiency of air filters installed in various locations such as shopping malls and hospital ventilation systems. It captures critical parameters like filter type, age, load, pressure drop, and efficiency over time. The dataset also includes measurements of particulate matter (PM2.5 and PM10) concentrations at both the inlet and outlet of the filters, offering insights into how effectively each filter is removing harmful particles from the air. Additionally, it tracks whether the filter requires replacement and flags any anomalies in its performance. This data is valuable for monitoring air quality, optimizing filter maintenance schedules, and ensuring optimal air filtration across different environments.
Esri's ArcGIS Online tools provide three methods of filtering larger datasets using the attribute or geospatial information that is part of each individual dataset. These instructions give a basic overview of the steps a GeoHub end user can take to filter out unnecessary data, or to home in on a particular location and download the specific information of interest: filtering through the search bar, as seen on the map, or using the attribute filters in the Data tab.
A data science project's primary objective is to analyze and prepare the data for the relevant machine learning task. Gathering the necessary data from the beauty domain is a crucial step toward accurate results for the machine learning project. To ensure that the gathered data are sufficient and relevant, it is vital to identify the appropriate data sources and analyze them. Homemade remedy recipes are becoming increasingly popular around the world, and numerous remedy-recipe videos are available on YouTube and Google. This information is needed to recommend a remedy based on a user's conditions. The dataset contains 18 different types of skin conditions that were identified by users through surveys.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the streaming data setting, where data arrive continuously or in frequent batches and there is no pre-determined amount of total data, Bayesian models can employ recursive updates, incorporating each new batch of data into the model parameters' posterior distribution. Filtering methods are currently used to perform these updates efficiently; however, they suffer from eventual degradation as the number of unique values within the filtered samples decreases. We propose Generative Filtering, a method for efficiently performing recursive Bayesian updates in the streaming setting. Generative Filtering retains the speed of a filtering method while using parallel updates to avoid degenerate distributions after repeated applications. We derive rates of convergence for Generative Filtering and conditions for the use of sufficient statistics instead of fully storing all past data. We investigate the alleviation of filtering degradation through simulation and an ecological time series of counts. Supplementary materials for this article are available online.
Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
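As background for the comparison above, a bootstrap particle filter can be sketched in a few lines. This is a generic one-dimensional illustration (random-walk state model, Gaussian likelihood, systematic resampling), not the pump prognostics code from the paper:

```python
import numpy as np

def particle_filter(observations, n_particles=500, process_std=0.5,
                    obs_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state
    observed with additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # samples from the prior
    estimates = []
    for y in observations:
        # Propagate each particle through the (random-walk) process model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight particles by the Gaussian observation likelihood p(y | x).
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        total = weights.sum()
        if total == 0.0:                 # guard against weight underflow
            weights = np.full(n_particles, 1.0 / n_particles)
        else:
            weights = weights / total
        # Systematic resampling keeps the sample set from degenerating.
        cdf = np.cumsum(weights)
        cdf[-1] = 1.0                    # guard against rounding error
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(cdf, positions)]
        estimates.append(particles.mean())
    return np.array(estimates)
```

The posterior mean at each step is just the average of the resampled particles; richer statistics (quantiles, remaining-useful-life distributions) come from the same particle set.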
Contains scans of a bin filled with different parts (screws, nuts, rods, spheres, sprockets). For each part type, an RGB image and an organized 3D point cloud obtained with a structured-light sensor are provided. In addition, an unorganized 3D point cloud representing an empty bin and a small MATLAB script for reading the files are also provided. The 3D data contain many outliers, and the data were used to demonstrate a new filtering technique.
Filter is a configurable app template that displays a map with an interactive filtered view of one or more feature layers. The application displays prompts and hints for attribute filter values, which are used to locate specific features.
Use Cases
Filter displays an interactive dialog box for exploring the distribution of a single attribute or the relationship between different attributes. This is a good choice when you want to understand the distribution of different types of features within a layer, or create an experience where you can gain deeper insight into how the interaction of different variables affects the resulting map content.
Configurable Options
Filter can present a web map and be configured with the following options:
- Choose the web map used in the application.
- Provide a title and color theme. The default title is the web map name.
- Configure the ability for feature and location search.
- Define the filter experience and provide text to encourage user exploration of data by displaying additional values to choose as the filter text.
Supported Devices
This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.
Data Requirements
Requires at least one layer with an interactive filter. See the Apply Filters help topic for more details.
Get Started
This application can be created in the following ways:
- Click the Create a Web App button on this page.
- Share a map and choose to Create a Web App.
- On the Content page, click Create - App - From Template.
Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Phylogenetic inference is generally performed on the basis of multiple sequence alignments (MSA). Because errors in an alignment can lead to errors in tree estimation, there is a strong interest in identifying and removing unreliable parts of the alignment. In recent years several automated filtering approaches have been proposed, but despite their popularity, a systematic and comprehensive comparison of different alignment filtering methods on real data has been lacking. Here, we extend and apply recently introduced phylogenetic tests of alignment accuracy on a large number of gene families and contrast the performance of unfiltered versus filtered alignments in the context of single-gene phylogeny reconstruction. Based on multiple genome-wide empirical and simulated data sets, we show that the trees obtained from filtered MSAs are on average worse than those obtained from unfiltered MSAs. Furthermore, alignment filtering often leads to an increase in the proportion of well-supported branches that are actually wrong. We confirm that our findings hold for a wide range of parameters and methods. Although our results suggest that light filtering (up to 20% of alignment positions) has little impact on tree accuracy and may save some computation time, contrary to widespread practice, we do not generally recommend the use of current alignment filtering methods for phylogenetic inference. By providing a way to rigorously and systematically measure the impact of filtering on alignments, the methodology set forth here will guide the development of better filtering algorithms.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: The accuracy of microbial community detection in 16S rRNA marker-gene and metagenomic studies suffers from contamination and sequencing errors that lead to either falsely identifying microbial taxa that were not in the sample or misclassifying the taxa of DNA fragment reads. Removing contaminants and filtering rare features are two common approaches to deal with this problem. While contaminant detection methods use auxiliary sequencing-process information to identify known contaminants, filtering methods remove taxa that are present in a small number of samples and have small counts in the samples where they are observed. The latter approach reduces the extreme sparsity of microbiome data and has been shown to correctly remove contaminant taxa in cultured "mock" datasets, where the true taxa compositions are known. Although filtering is frequently used, careful evaluation of its effect on the data analysis and scientific conclusions remains unreported. Here, we assess the effect of filtering on alpha and beta diversity estimation, as well as its impact on identifying taxa that discriminate between disease states.
Results: The effect of filtering on microbiome data analysis is illustrated on four datasets: two mock quality-control datasets, where the same cultured samples with known microbial composition are processed at different labs, and two disease-study datasets. Results show that in the microbiome quality-control datasets, filtering reduces the magnitude of differences in alpha diversity and alleviates technical variability between labs while preserving the between-sample similarity (beta diversity). In the disease-study datasets, DESeq2 and linear discriminant analysis effect size (LEfSe) methods were used to identify taxa that are differentially abundant across groups of samples, and random forest models were used to rank features with the largest contribution toward disease classification. Results reveal that filtering retains significant taxa and preserves the model classification ability measured by the area under the receiver operating characteristic curve (AUC). The comparison between filtering and the contaminant removal method shows that they have complementary effects and are advised to be used in conjunction.
Conclusions: Filtering reduces the complexity of microbiome data while preserving their integrity in downstream analysis. This leads to mitigation of the classification methods' sensitivity and reduction of technical variability, allowing researchers to generate more reproducible and comparable results in microbiome data analysis.
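As a concrete illustration of the rare-feature filtering step discussed above, a minimal prevalence/abundance filter over a sample-by-taxon count matrix might look like the sketch below. The thresholds are illustrative, not the ones used in the study:

```python
import numpy as np

def filter_rare_taxa(counts, min_prevalence=0.1, min_count=2):
    """Keep taxa (columns) observed with at least `min_count` reads in at
    least a `min_prevalence` fraction of samples (rows).

    Thresholds are illustrative; studies tune them to their data.
    """
    present = counts >= min_count      # per-sample presence indicator
    prevalence = present.mean(axis=0)  # fraction of samples with the taxon
    keep = prevalence >= min_prevalence
    return counts[:, keep], keep
```

Filtering this way reduces sparsity before diversity estimation or differential-abundance testing, at the cost of discarding genuinely rare taxa, which is exactly the trade-off the study evaluates.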
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises sea surface height (SSH) and velocity data at the ocean surface in two small regions near the Agulhas retroflection. The unfiltered SSH and a horizontal velocity field are provided, along with the same fields after various kinds of filtering, as described in the accompanying manuscript, Separating balanced and unbalanced flow at the surface of the Agulhas region using Lagrangian filtering. The code repository for this work is https://github.com/cspencerjones/separating-balanced.
Two time-resolutions are provided: two weeks of hourly data and 70 days of daily data. See the manuscript for more information.
This work was supported by NASA award 80NSSC20K1142.
Many diagnostic datasets suffer from the adverse effects of spikes that are embedded in data and noise. For example, this is true for electrical power system data where the switches, relays, and inverters are major contributors to these effects. Spikes are mostly harmful to the analysis of data in that they throw off real-time detection of abnormal conditions, and classification of faults. Since noise and spikes are mixed together and embedded within the data, removal of the unwanted signals from the data is not always easy and may result in losing the integrity of the information carried by the data. Additionally, in some applications noise and spikes need to be filtered independently. The proposed algorithm is a multi-resolution filtering approach based on Haar wavelets that is capable of removing spikes while incurring insignificant damage to other data. In particular, noise in the data, which is a useful indicator that a sensor is healthy and not stuck, can be preserved using our approach. Presented here is the theoretical background with some examples from a realistic testbed.
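A rough sketch of the idea: one-level Haar detail coefficients localize spikes (which appear as outlying details), and flagged samples are then repaired locally. The published algorithm is multi-resolution and more careful about leaving sensor noise untouched; this simplified hybrid is only illustrative:

```python
import numpy as np

def haar_despike(x, k=6.0, win=5):
    """Locate spikes via one-level Haar detail coefficients, then repair
    the flagged sample pairs with a local median.

    Simplified sketch: a spike makes the difference between adjacent
    samples (the Haar detail) an outlier, while ordinary sensor noise
    stays within a robust band and is left untouched.
    """
    x = np.asarray(x, dtype=float)
    out = x.copy()
    n = len(x) - len(x) % 2                      # even-length segment
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)     # Haar detail coefficients
    med = np.median(d)
    sigma = 1.4826 * np.median(np.abs(d - med))  # robust (MAD-based) scale
    for i in np.nonzero(np.abs(d - med) > k * max(sigma, 1e-12))[0]:
        for j in (2 * i, 2 * i + 1):             # repair the flagged pair
            lo, hi = max(0, j - win), min(len(x), j + win + 1)
            out[j] = np.median(x[lo:hi])
    return out
```

Because the threshold is set from the robust spread of the detail coefficients, the ordinary noise floor passes through unchanged, echoing the paper's point that healthy-sensor noise should be preserved.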
Particle filters (PF) have been established as the de facto state of the art in failure prognosis. They combine the rigor of Bayesian estimation with nonlinear prediction while also providing uncertainty estimates for a given solution. Within the context of particle filters, this paper introduces several novel methods for uncertainty representation and uncertainty management. The prediction uncertainty is modeled via a rescaled Epanechnikov kernel and is assisted with resampling techniques and regularization algorithms. Uncertainty management is accomplished through parametric adjustments in a feedback correction loop of the state model and its noise distributions. The correction loop provides the mechanism to incorporate information that can improve solution accuracy and reduce uncertainty bounds. In addition, this approach results in a reduction of computational burden. The scheme is illustrated with real vibration feature data from a fatigue-driven fault in a critical aircraft component.
This paper presents a sliding-window constrained fault-tolerant filtering method for sampling data in petrochemical instrumentation. The method requires the design of an appropriate sliding-window width based on the time series, as well as the expansion of both ends of the series. By utilizing a sliding-window constraint function, the method produces a smoothed estimate for the current moment within the window. As the window advances, a series of smoothed estimates of the original sampled data is generated. Subsequently, the original series is subtracted from this smoothed estimate to create a new series that represents the differences between the two. This difference series is then subjected to an additional smoothing estimation process, and the resulting smoothed estimates are employed to compensate for the smoothed estimates of the original sampled series. The experimental results indicate that, compared with sliding mean filtering, sliding median filtering, and Savitzky-Golay filtering, ...
Sliding window constrained fault-tolerant filtering of compressor vibration data
https://doi.org/10.5061/dryad.pc866t20z
Data type
Files containing ‘fdata1case1’ in the file name represent case "1" of the location of the outlier in measured data "1", and so on;
Files containing ‘fwavedata’ in the file name are wave signals with outliers;
Files containing ‘fwave2data’ in the file name are polynomial signals with outliers;
Files containing ‘normaldata’ in the file name are normal measured data;
Files containing ‘normalwavedata’ in the file name are normal wave signals;
Files containing ‘normalwave2data’ in the file name are normal polynomial signals;
Files containing ‘ftffiltered’ in the file name indicate that the data have been processed by sliding-window constrained fault-tolerant filtering;
Files containing ‘sgfiltered’ in the file name indicate data after Savitzky-Golay filtering...
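The smooth-then-compensate idea described in the abstract (smooth the series, form the residual series, smooth the residuals, add them back) can be sketched as follows. The paper's sliding-window constraint function and fault-tolerance logic are not reproduced here; this is just the compensation skeleton with a plain moving average:

```python
import numpy as np

def window_smooth(x, width=5):
    """Centered moving average with edge padding; width must be odd."""
    pad = width // 2
    xp = np.pad(np.asarray(x, dtype=float), pad, mode="edge")
    kernel = np.ones(width) / width
    return np.convolve(xp, kernel, mode="valid")

def compensated_filter(x, width=5):
    """Smooth, then smooth the residual (difference) series and add it
    back, echoing the compensation step described in the abstract."""
    s = window_smooth(x, width)           # first-pass smoothed estimate
    r = np.asarray(x, dtype=float) - s    # residual series
    return s + window_smooth(r, width)    # compensated estimate
```

The compensation step restores part of the signal content that the first smoothing pass attenuates, so the compensated estimate tracks a smooth signal more closely than a single pass does.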
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
A recommender system seeks to predict or filter preferences according to the user's choices. Recommender systems are utilized in a variety of areas including movies, music, news, books, research articles, search queries, social tags, and products in general.
Recommender systems produce a list of recommendations in one of two ways:
1. Collaborative filtering: Collaborative filtering approaches build a model from a user's past behavior (i.e., items purchased or searched for by the user) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.
2. Content-based filtering: Content-based filtering approaches use a series of discrete characteristics of an item in order to recommend additional items with similar properties. Content-based methods rely entirely on a description of the item and a profile of the user's preferences, recommending items based on the user's past preferences.
Let's develop a basic recommendation system that suggests items most similar to a particular item, in this case, movies. It simply reports which movies/items are most similar to the user's movie choice.
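A minimal sketch of that item-similarity idea, using cosine similarity between rating columns; the movie titles and ratings here are made up for illustration:

```python
import numpy as np

# Hypothetical user-item rating matrix: rows are users, columns are movies.
MOVIES = ["Alien", "Aliens", "Amelie", "Up"]
RATINGS = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
    [1, 0, 4, 5],
], dtype=float)  # 0 means "not rated"

def most_similar(movie, ratings=RATINGS, movies=MOVIES, top_n=2):
    """Rank other movies by cosine similarity of their rating columns."""
    i = movies.index(movie)
    cols = ratings.T                       # one row per movie
    norms = np.linalg.norm(cols, axis=1)
    sims = cols @ cols[i] / (norms * norms[i] + 1e-12)
    order = [j for j in np.argsort(-sims) if j != i]
    return [movies[j] for j in order[:top_n]]
```

Movies rated highly by the same users get high similarity, so querying a title returns titles with co-occurring high ratings, which is the intuition behind item-based recommendation.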
Privacy policy: https://www.datainsightsmarket.com/privacy-policy
The global DIN Rail Rail-mounted Filter market is poised for substantial growth, projected to reach a market size of approximately $XXX million by 2033, expanding from an estimated $XXX million in 2025. This impressive trajectory is driven by a Compound Annual Growth Rate (CAGR) of XX% during the forecast period of 2025-2033. The increasing demand for reliable power quality and electromagnetic interference (EMI) suppression across various industries fuels this expansion. Key applications such as Communication, Medical Equipment, and Instrumentation are witnessing a significant surge in the adoption of these filters, essential for ensuring the smooth and accurate operation of sensitive electronic devices. The proliferation of industrial automation, smart grid technologies, and the continuous evolution of medical diagnostic equipment are major catalysts for this market's upward momentum. Furthermore, the growing emphasis on regulatory compliance for electromagnetic compatibility (EMC) across developed and developing economies is compelling manufacturers to integrate advanced filtration solutions, thereby boosting market penetration. The market is characterized by several key trends, including the development of compact and high-performance filters, miniaturization of electronic components, and the increasing integration of smart functionalities within filters for enhanced monitoring and control. While the market presents a robust growth outlook, certain restraints could temper its pace. The relatively high initial cost of advanced filter technologies and the availability of alternative, albeit less effective, filtering methods in some cost-sensitive applications may pose challenges. However, the long-term benefits of improved system reliability, reduced downtime, and enhanced product lifespan associated with DIN Rail Rail-mounted Filters are expected to outweigh these concerns. 
The market landscape is competitive, with prominent players like TDK Corporation, TE Connectivity, and Eaton actively innovating and expanding their product portfolios to cater to evolving industry needs. Asia Pacific is anticipated to emerge as a dominant region, driven by rapid industrialization and a burgeoning manufacturing sector, followed closely by North America and Europe, which benefit from established technological infrastructure and stringent regulatory frameworks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Excel data set in which the test-bench data are given: the input-output relation.
These MATLAB files accompany the following publication:
Kulikova M.V., Tsyganova J.V. (2015) "Constructing numerically stable Kalman filter-based algorithms for gradient-based adaptive filtering", International Journal of Adaptive Control and Signal Processing, 29(11):1411-1426. DOI http://dx.doi.org/10.1002/acs.2552
The paper addresses the numerical aspects of adaptive filtering (AF) techniques for simultaneous state and parameter estimation (e.g., by the method of maximum likelihood). Here, we show that various square-root AF schemes can be derived from only two main theoretical results. These elegant and simple computational techniques replace the standard methodology based on direct differentiation of the conventional KF equations (with their inherent numerical instability) by advanced square-root filters (and their derivatives).
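For context, the conventional Kalman filter recursion that square-root methods reformulate can be sketched as a single predict/update step. This is a generic textbook sketch, not one of the paper's algorithms:

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update step of the conventional Kalman filter.

    Square-root methods instead propagate a Cholesky factor of P,
    avoiding the loss of symmetry and positive definiteness that this
    direct covariance recursion can suffer in finite precision.
    """
    # Predict: propagate state mean and covariance through the model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fold in the measurement y.
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Differentiating these equations with respect to unknown model parameters gives the gradient used in maximum-likelihood adaptive filtering, which is exactly the step the paper replaces with numerically stable square-root counterparts.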
The codes have been presented here for their instructional value only. They have been tested with care but are not guaranteed to be free of error and, hence, they should not be relied on as the sole basis to solve problems.
If you use these codes in your research, please cite the corresponding article.
Terms and conditions: https://dataful.in/terms-and-conditions
The dataset contains Year-, state- and region-wise compiled data on distribution of households (per thousand) by different Types of Filtering Methods such as boiling, electronic purification, chemical treatment with alum, chlorine, bleach, water filter, cloth, etc. used for treating different Types of Drinking Water, during the period of 1998 to 2018. The dataset has been compiled from Table No. 10 and Statement No. 8.1 of 54th and 76th reports of NSS.