Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system, and on the desired algorithm performance. In this paper, we review three separate filters for the solution to the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
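The third of these filters, in its simplest bootstrap form, can be sketched as follows on a hypothetical scalar random-walk model (the centrifugal-pump model and tuning used in the paper are different; the model, noise levels, and particle count below are illustrative assumptions):

```python
import numpy as np

def particle_filter(observations, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a scalar random-walk state with
    Gaussian process noise (std q) and measurement noise (std r)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud
    estimates = []
    for y in observations:
        # Propagate each particle through the (random-walk) state model.
        particles = particles + rng.normal(0.0, q, n_particles)
        # Weight particles by the Gaussian measurement likelihood p(y | x).
        weights = np.exp(-0.5 * ((y - particles) / r) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling to avoid weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
    return np.array(estimates)

# Track a slowly drifting state observed in noise.
rng = np.random.default_rng(1)
true_state = np.linspace(0.0, 2.0, 50)
obs = true_state + rng.normal(0.0, 0.5, 50)
est = particle_filter(obs)
```

Each cycle propagates the particles through the state model, reweights them by the measurement likelihood, and resamples, the same predict/update structure that the joint state-parameter estimation step relies on.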
The algorithms in this data release implement a State-Space Model (SSM) of vertical infiltration through the unsaturated zone and recharge to the water table. These algorithms build on previous investigations available at https://doi.org/10.1029/2020WR029110 and https://doi.org/10.1111/gwat.13206. The SSM is defined by observed states (i.e., the water-table altitude) and unobserved states (i.e., fluxes through the unsaturated zone and recharge to the water table) and interprets time-series data for observations of water-table altitude and meteorological inputs (i.e., the liquid precipitation rate and the Potential Evapotranspiration (PET) rate). The algorithms first estimate the SSM parameters from the time-series data over a Parameter-Estimation Window (PEW). The estimated model parameters are then used in a subsequent State-Estimation Window (SEW) to estimate the observed and unobserved system states of the SSM using the Kalman Filter (KF). The application of the KF to the SSM facilitates the assimilation of recently available observations of the water-table altitude into the estimation of the observed and unobserved system states over the SEW. An additional outcome of applying the KF is the calculation of the time-varying error covariance of the system states over the SEW. The algorithms are used to compare the model outcomes for forecasting, filtering, and fixed-lag smoothing (FLS) using data for water-table altitude and meteorological inputs from the Masser Recharge Site, which was operated by the U.S. Department of Agriculture, Agricultural Research Service. The algorithms were prepared and executed using the computational software MATLAB to meet the needs of the investigation presented in https://doi.org/10.1111/gwat.13349. MATLAB is proprietary software, and thus an executable version of the software cannot be supplied with this data release.
The MATLAB files comprising the algorithms are included in this data release. The interested user would need to secure the appropriate versions of MATLAB and the associated MATLAB toolboxes. This USGS data release contains all of the input and output files for the simulations described in the associated journal article (https://doi.org/10.1111/gwat.13349).
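For readers without MATLAB, the filtering step at the heart of the SEW can be sketched in Python for a generic linear-Gaussian SSM; the matrices and toy observations below are illustrative placeholders, not the infiltration model or data from the associated article:

```python
import numpy as np

def kalman_filter(y, A, H, Q, R, x0, P0):
    """Standard Kalman filter: returns filtered state means and covariances.
    y: (T, m) observations; A: state transition; H: observation matrix."""
    x, P = x0.copy(), P0.copy()
    means, covs = [], []
    for yt in y:
        # Predict.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the newly available observation (data assimilation).
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        means.append(x.copy())
        covs.append(P.copy())
    return np.array(means), np.array(covs)

# Toy 1-D example: noisy observations of a constant level.
A = np.array([[1.0]])
H = np.array([[1.0]])
Q = np.array([[1e-4]])
R = np.array([[0.25]])
rng = np.random.default_rng(0)
y = 5.0 + rng.normal(0.0, 0.5, size=(100, 1))
means, covs = kalman_filter(y, A, H, Q, R, np.zeros(1), np.eye(1))
```

The returned covariances illustrate the time-varying error covariance that the KF computes over the SEW as a by-product of filtering.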
IST-Rolla developed two nonlinear filters for spacecraft orbit determination during the Phase I contract. The theta-D filter and the cost-based filter (CBF) were developed and used in various orbit determination scenarios, including application to low Earth orbit with range-only and range and range-rate estimation. A modified state observer was also developed to estimate uncertainty in the dynamic model in addition to estimating the orbital states.
Phase I research showed that there is a problem with the linear-like form used by many nonlinear filters, such as the State-Dependent Riccati Equation (SDRE) filter, the theta-D filter, and the CBF. A study of observability led to important discoveries about the lack of observability in some formulations. A detailed study of the workings of the proposed nonlinear filters in terms of observability, and of their application to more precise orbit determination and model uncertainty estimation, will be undertaken in Phase II.
Building on lessons learned in Phase I, IST-Rolla will focus more on how and where these nonlinear filters can help NASA. Three main applications will be studied during Phase II: interplanetary orbit determination, space debris tracking, and interplanetary landing spacecraft tracking. These applications were chosen because of their relevance to current NASA missions and because the nonlinearity of the measurements involved should demonstrate the need for the nonlinear filters. Furthermore, working algorithms and software will be given to NASA to test on ongoing applications.
This dataset contains scans of a bin filled with different parts (screws, nuts, rods, spheres, and sprockets). For each part type, an RGB image and an organized 3D point cloud obtained with a structured-light sensor are provided. In addition, an unorganized 3D point cloud representing an empty bin and a small Matlab script to read the files are provided. The 3D data contain many outliers, and the data were used to demonstrate a new filtering technique.
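As a rough illustration of the kind of outlier removal such data call for (not the new technique demonstrated with the dataset), a statistical outlier-removal pass over a point cloud might look like this; the neighbourhood size and threshold are illustrative assumptions:

```python
import numpy as np

def remove_outliers(points, k=8, n_std=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is more than n_std standard deviations
    above the global average of that statistic."""
    # Full pairwise distances (fine for small clouds; use a KD-tree for large ones).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip self-distance in column 0
    keep = mean_knn < mean_knn.mean() + n_std * mean_knn.std()
    return points[keep]

# Dense cluster plus a few far-away outliers.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.05, size=(200, 3))
outliers = rng.uniform(5.0, 10.0, size=(5, 3))
filtered = remove_outliers(np.vstack([cloud, outliers]))
```

Isolated points far from any surface get a large mean neighbour distance and are discarded, while points on dense part surfaces survive.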
https://creativecommons.org/publicdomain/zero/1.0/
The two CSV files here are the train and test data from Kaggle's Ion Switching Competition, with drift removed and filtered with a Kalman filter to reduce noise.
These ideas were posted by @cdeotte and @teejmahal20; I just ran the filter and the feature engineering (FE) and saved the data.
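For illustration, a generic polynomial detrending pass of the kind used for drift removal might look like the sketch below; the original notebooks used segment-wise corrections tailored to the competition data, so this is an assumption, not their exact procedure:

```python
import numpy as np

def remove_drift(signal, deg=3):
    """Remove slow drift by subtracting a low-order polynomial fit,
    leaving the fast channel-switching signal intact."""
    t = np.arange(len(signal))
    coeffs = np.polyfit(t, signal, deg)
    drift = np.polyval(coeffs, t)
    return signal - drift

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
clean = rng.choice([0.0, 1.0], size=1000)   # step-like channel signal
drifted = clean + 2.0 * t ** 2              # slow quadratic drift
detrended = remove_drift(drifted)
```

The least-squares fit absorbs the slow drift (and the signal's mean), so the residual keeps only the fast step structure.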
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises sea surface height (SSH) and velocity data at the ocean surface in two small regions near the Agulhas retroflection. The unfiltered SSH and a horizontal velocity field are provided, along with the same fields after various kinds of filtering, as described in the accompanying manuscript, Using Lagrangian filtering to remove waves from the ocean surface velocity field (https://doi.org/10.31223/X5D352). The code repository for this work is https://github.com/cspencerjones/separating-balanced.
Two time-resolutions are provided: two weeks of hourly data and 70 days of daily data.
Seventy_daysA.nc contains daily data for region A and Seventy_daysB.nc contains daily data for region B, including unfiltered, Lagrangian-filtered, and omega-filtered velocity and sea-surface height.
two_weeksA.nc contains hourly data for region A and two_weeksB.nc contains hourly data for region B, including unfiltered and Lagrangian-filtered velocity and sea-surface height.
Note that region A has been moved in version 2 of this dataset.
See the manuscript and code repository for more information.
This work was supported by NASA award 80NSSC20K1142.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In recent years, various real-time processing methods have been developed for Satellite Laser Ranging (SLR) data. However, the recognition rate of the single-stage Graz filtering algorithm for high-orbit satellites is less than 1%, and traditional two-stage filtering algorithms, such as polynomial fitting and iterative filtering techniques, exhibit high false and missed detection rates. These issues compromise the accuracy of laser positioning and real-time adjustments during observations. To address these problems, we propose a new, efficient real-time SLR data processing method. This algorithm combines single-stage filtering with a histogram-based approach and incorporates polynomial fitting to establish a predictive model. This offers the advantage of fast and efficient real-time signal recognition. The experimental results demonstrate that the proposed algorithm compensates for the limitations of single-stage filtering algorithms and performs better than traditional two-stage filtering algorithms in identifying medium- and high-orbit satellite signals. The false detection rate was reduced to below 15%, while achieving faster computation speeds. This method is convenient for researchers in their observations and offers new insights and directions for further research and application in the real-time identification of satellite laser ranging echo signals.
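The histogram idea can be sketched on synthetic range residuals as follows; the bin width, units, and noise model here are illustrative assumptions, not the published algorithm's settings:

```python
import numpy as np

def find_signal_bin(residuals, bin_width=5.0):
    """Locate the laser echo track by histogramming range residuals:
    true returns pile up in one narrow bin, noise spreads uniformly."""
    lo, hi = residuals.min(), residuals.max()
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, edges = np.histogram(residuals, bins=edges)
    i = counts.argmax()
    return edges[i], edges[i + 1]   # bounds of the densest bin

rng = np.random.default_rng(0)
noise = rng.uniform(-500.0, 500.0, 2000)   # uniform background noise
signal = rng.normal(42.0, 1.0, 300)        # echoes clustered near 42
lo, hi = find_signal_bin(np.concatenate([noise, signal]))
```

Once the densest bin is found, a polynomial fit through the residuals inside it could serve as the predictive model for subsequent real-time acceptance of returns.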
Particle filters (PF) have been established as the de facto state of the art in failure prognosis. They combine advantages of the rigors of Bayesian estimation to nonlinear prediction while also providing uncertainty estimates with a given solution. Within the context of particle filters, this paper introduces several novel methods for uncertainty representations and uncertainty management. The prediction uncertainty is modeled via a rescaled Epanechnikov kernel and is assisted with resampling techniques and regularization algorithms. Uncertainty management is accomplished through parametric adjustments in a feedback correction loop of the state model and its noise distributions. The correction loop provides the mechanism to incorporate information that can improve solution accuracy and reduce uncertainty bounds. In addition, this approach results in reduction in computational burden. The scheme is illustrated with real vibration feature data from a fatigue-driven fault in a critical aircraft component.
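A minimal one-dimensional sketch of regularized resampling with Epanechnikov-kernel jitter follows; the rule-of-thumb bandwidth and rejection sampler are illustrative assumptions, and the paper's rescaling and feedback-correction loop are more involved:

```python
import numpy as np

def regularized_resample(particles, weights, seed=0):
    """Resample, then jitter each particle with Epanechnikov-kernel noise
    scaled by the empirical spread, which combats sample impoverishment."""
    rng = np.random.default_rng(seed)
    n = len(particles)
    resampled = particles[rng.choice(n, n, p=weights)]
    # Rule-of-thumb bandwidth proportional to sigma * n^(-1/5).
    h = 0.5 * np.std(particles) * n ** (-1 / 5)
    # Sample Epanechnikov noise, f(u) = 0.75 * (1 - u^2) on [-1, 1],
    # by rejection from a uniform envelope of height 0.75.
    noise = np.empty(n)
    filled = 0
    while filled < n:
        u = rng.uniform(-1.0, 1.0, n - filled)
        accept = rng.uniform(0.0, 0.75, n - filled) < 0.75 * (1.0 - u ** 2)
        kept = u[accept]
        noise[filled:filled + len(kept)] = kept
        filled += len(kept)
    return resampled + h * noise

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 1.0, 1000)
weights = np.full(1000, 1e-3)            # uniform weights summing to 1
new_particles = regularized_resample(particles, weights)
```

The bounded kernel spreads duplicated particles over a small neighbourhood instead of leaving identical copies, which is the essence of regularization after resampling.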
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The computer programs implement the adaptive algorithms for real-time ECG signal filtering and numerical simulation for evaluation of filter effectiveness. The adaptive algorithms, which make combined use of Hampel identifiers and the Z-parameter, are the author's own development. To launch a program, enter the name of the program and the ECG model signal in the command line, for example: ahzmthp.exe clean.txt. The test signal parameters (its length) and the parameters of the filtering algorithms are read from the text file named "filters.txt". The program requests the additive and multiplicative noise variance, the probability and amplitude of the spikes, and the number of realizations for statistical averaging of the calculated filter performance indicators. For example, separated by the "space" key, enter: 0.0001 0 0 0 200, then press "Enter". To apply filtering to a test signal read from a text file, select the menu item "Load from file" by pressing the key "6". The filter results are put in the "RESULT" subfolder. The filter efficiency estimates are written to the "MSE.res" and "SNR.res" output text files. The input signal has the extension ".x" (no noise), ".xn" (with simulated noise), or ".xns" (with noise and spikes). The signals from the filter algorithm outputs have the extension ".yf". Files with the functions of the identifiers used to adapt the algorithm parameters to the local signal behavior and to changes in the noise level, files with the adaptable filter parameters, and other intermediate signals are also put in the "RESULT" subfolder. Input and output signals are presented in text format as two columns, where the first column is the numbering of the samples and the second is the corresponding values of the discrete signal. The first number in the first line is the signal length. To apply filtering to signals of different lengths, the first number that indicates the signal length in the second line of the "filters.txt" file has to be changed accordingly.
Correction of the filter parameters (in "filters.txt") is recommended for ECGs with another sampling rate. The program was compiled with Free Pascal.
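A basic Hampel identifier of the kind these algorithms build on can be sketched in Python; the window size and threshold below are illustrative, and the release's Pascal programs additionally adapt such parameters to the local signal via the Z-parameter:

```python
import numpy as np

def hampel_filter(x, half_window=10, n_sigma=3.0):
    """Hampel identifier: flag a sample as a spike when it deviates from the
    local median by more than n_sigma robust standard deviations
    (1.4826 * MAD), and replace it with that median."""
    y = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        window = x[lo:hi]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))
        if mad > 0 and abs(x[i] - med) > n_sigma * mad:
            y[i] = med
    return y

rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0.0, 8 * np.pi, 400)) + rng.normal(0.0, 0.05, 400)
ecg[[50, 200, 350]] += 5.0                 # simulated spikes
cleaned = hampel_filter(ecg)
```

Because the median and MAD are robust statistics, isolated spikes are removed without smearing the underlying waveform the way a mean filter would.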
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
RIP is a method for preference-data filtering. The core idea is that low-quality input prompts lead to high-variance, low-quality responses. By measuring the quality of rejected responses and the reward gap between chosen and rejected preference pairs, RIP effectively filters prompts to enhance dataset quality. We release 4k prompts filtered from 20k Wildchat prompts. For each prompt, we provide 64 responses from Llama-3.1-8B-Instruct and their corresponding rewards obtained from ArmoRM.… See the full description on the dataset page: https://huggingface.co/datasets/facebook/Wildchat-RIP-Filtered-by-8b-Llama.
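One plausible reading of the filtering criterion can be sketched as follows; the scoring rule, the keep fraction, and the synthetic rewards are assumptions for illustration, not the released implementation:

```python
import numpy as np

def rip_filter(rewards_per_prompt, keep_fraction=0.2):
    """Score each prompt from its response rewards: take the rejected
    (worst) reward and the chosen-minus-rejected gap, then keep prompts
    whose rejected reward is high and whose gap is small, i.e. prompts
    that elicit consistently good responses."""
    rewards = np.asarray(rewards_per_prompt)   # shape (n_prompts, n_responses)
    rejected = rewards.min(axis=1)
    gap = rewards.max(axis=1) - rejected
    score = rejected - gap                     # high rejected, low gap -> high score
    k = int(len(rewards) * keep_fraction)
    return np.argsort(score)[::-1][:k]         # indices of kept prompts

# 100 hypothetical prompts with 64 scalar rewards each; the first 20
# are "low-quality" prompts that produce high-variance responses.
rng = np.random.default_rng(0)
rewards = rng.normal(0.5, 0.1, size=(100, 64))
rewards[:20] += rng.normal(0.0, 0.5, size=(20, 64))
kept = rip_filter(rewards)
```

High-variance prompts get both a low worst-case reward and a large gap, so they score poorly and are filtered out.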
https://dataintelo.com/privacy-and-policy
In 2023, the global internet filter software market size was valued at approximately USD 2.5 billion. By 2032, it is projected to reach around USD 6.4 billion, growing at a compound annual growth rate (CAGR) of 10.9%. This impressive growth can be attributed to the increasing need for cybersecurity measures and parental control, as well as the rise in digital content consumption globally.
The surge in internet usage, especially among younger audiences, has propelled the demand for internet filter software. Parents and educational institutions are increasingly seeking solutions to monitor and restrict access to inappropriate content, ensuring a safe online environment. Additionally, with the proliferation of connected devices and the internet of things (IoT), the risk of cyber threats has escalated, driving enterprises and government sectors to adopt robust internet filtering solutions. These factors collectively contribute to the market's expansion.
Enterprises of all sizes are recognizing the importance of maintaining cybersecurity and productivity. Internet filter software helps in blocking access to non-work-related websites, reducing distractions and enhancing employee productivity. Moreover, with the rise in remote working and the adoption of hybrid work models, the need for effective internet filtering solutions has become more critical than ever. Companies are investing in advanced software to protect their networks from malicious activities and ensure compliance with organizational policies.
Technological advancements and innovations in filtering techniques are also playing a significant role in market growth. The development of AI and machine learning algorithms has enabled more sophisticated and accurate filtering capabilities, capable of dynamically adapting to new threats and evolving internet content. These advancements have broadened the scope of internet filter software applications, making them more robust and reliable, thus attracting a broader user base across various sectors.
Dns Filtering Software is becoming an integral part of the cybersecurity landscape, particularly as organizations seek to enhance their internet filter software capabilities. This type of software provides an additional layer of security by blocking access to malicious websites at the domain level, preventing potential threats before they can reach the network. As cyber threats become more sophisticated, the ability to filter DNS requests in real-time ensures that harmful content is intercepted early, reducing the risk of data breaches and other cyber incidents. The integration of Dns Filtering Software into existing security frameworks not only bolsters protection but also aids in compliance with regulatory requirements, making it a valuable tool for enterprises and institutions alike.
Regionally, North America currently dominates the market, driven by high internet penetration rates, stringent cybersecurity regulations, and a strong presence of key market players. However, the Asia Pacific region is anticipated to witness the highest growth rate during the forecast period, fueled by rapid digitalization, increasing internet users, and growing awareness about online safety and security. The expansion of the IT infrastructure and the rising adoption of cloud-based solutions further support the market growth in this region.
The internet filter software market by component is segmented into software and services. The software segment holds the largest market share, driven by the widespread adoption of filtering applications across various devices, including computers, smartphones, and tablets. Advanced software solutions offer features like real-time monitoring, customizable filtering settings, and comprehensive reporting, making them essential tools for both personal and professional use. Continuous upgrades and new software releases also contribute to the segment's dominance.
Within the software segment, standalone applications and integrated software solutions are notable sub-categories. Standalone applications, often used in residential settings, provide dedicated filtering capabilities without needing integration with other systems. In contrast, integrated software solutions are more common in enterprises and educational institutions, where they work in conjunction with existing IT infrastructures to offer seamless protection across multiple
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about book subjects. It has 4 rows and is filtered to the book An introduction to wavelets and other filtering methods in finance and economics. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.
This archive contains the necessary files to replicate the experiment in "Ordering Classifier Chains using filter model feature selection techniques", including implementations of the Forward-Oriented Chain Selection (FOCS) and Backward-Oriented Chain Selection (BOCS) algorithms, the code used to perform the experiment, and all of the datasets used in the experiment.
In this paper, we propose a novel approach to reduce the noise in Synthetic Aperture Radar (SAR) images using particle filters. Interpretation of SAR images is a difficult problem, since they are contaminated with a multiplicative noise known as “speckle noise”. In the literature, the general approach for removing the speckle is to use local statistics computed in a square window. Here, we propose to use particle filters, a sequential Bayesian technique. The proposed method also uses the local statistics to denoise the images. Since this is a Bayesian approach, the computed statistics of the window can be exploited as a priori information. Moreover, particle filters are sequential methods, which are better suited to handling the heterogeneous structure of the image. Computer simulations show that the proposed method provides better edge-preserving results with satisfactory speckle removal when compared to the results obtained by the Gamma Maximum a posteriori (MAP) filter.
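The local-statistics approach referred to above can be illustrated with a Lee-style filter, a standard local-statistics speckle filter; this is a baseline sketch, not the particle-filter method proposed in the paper:

```python
import numpy as np

def lee_filter(img, win=7, n_looks=1):
    """Lee speckle filter: within each square window, shrink the pixel toward
    the local mean by a weight derived from the local vs. speckle variance."""
    cu2 = 1.0 / n_looks                    # speckle coefficient of variation^2
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            ci2 = v / (m * m + 1e-12)      # local coefficient of variation^2
            k = max(0.0, 1.0 - cu2 / (ci2 + 1e-12))
            out[i, j] = m + k * (img[i, j] - m)
    return out

# Constant scene corrupted by multiplicative (speckle-like) noise.
rng = np.random.default_rng(0)
scene = np.full((32, 32), 100.0)
speckled = scene * rng.exponential(1.0, scene.shape)   # 1-look intensity speckle
smoothed = lee_filter(speckled)
```

In homogeneous regions the weight collapses to zero and the pixel is replaced by the local mean, while near edges the weight grows and the original value is retained, the edge-preservation behaviour the paper compares against.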
https://www.datainsightsmarket.com/privacy-policy
The global filter market size is valued at USD 67,700 million in 2025 and is projected to reach USD 98,900 million by 2033, exhibiting a CAGR of 4.0% during the forecast period. The market growth is primarily attributed to the increasing adoption of filters in various applications, such as communication, electronic countermeasures, and radar systems. The rise in demand for advanced filtering technologies for improved signal processing and noise reduction in electronic devices is also contributing to market growth. Key market drivers include the growing need for improved signal processing in communication systems, increasing adoption of radar technology in various industries, and the rising demand for electronic countermeasures in defense and aerospace applications. However, factors such as the high cost of advanced filters and the availability of alternative noise reduction techniques may restrain market growth to some extent. The increasing demand for filters in emerging markets, advancements in filter design and manufacturing technologies, and the growing popularity of IoT and connected devices are expected to offer significant growth opportunities for the market.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this brief
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Phylogenetic inference is generally performed on the basis of multiple sequence alignments (MSA). Because errors in an alignment can lead to errors in tree estimation, there is a strong interest in identifying and removing unreliable parts of the alignment. In recent years several automated filtering approaches have been proposed, but despite their popularity, a systematic and comprehensive comparison of different alignment filtering methods on real data has been lacking. Here, we extend and apply recently introduced phylogenetic tests of alignment accuracy on a large number of gene families and contrast the performance of unfiltered versus filtered alignments in the context of single-gene phylogeny reconstruction. Based on multiple genome-wide empirical and simulated data sets, we show that the trees obtained from filtered MSAs are on average worse than those obtained from unfiltered MSAs. Furthermore, alignment filtering often leads to an increase in the proportion of well-supported branches that are actually wrong. We confirm that our findings hold for a wide range of parameters and methods. Although our results suggest that light filtering (up to 20% of alignment positions) has little impact on tree accuracy and may save some computation time, contrary to widespread practice, we do not generally recommend the use of current alignment filtering methods for phylogenetic inference. By providing a way to rigorously and systematically measure the impact of filtering on alignments, the methodology set forth here will guide the development of better filtering algorithms.
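The simplest form of alignment filtering, dropping gap-rich columns, can be sketched as follows; real filtering tools such as Gblocks or trimAl apply richer reliability criteria than this gap-fraction threshold:

```python
import numpy as np

def filter_columns(msa, max_gap_fraction=0.5):
    """Drop alignment columns whose gap fraction exceeds the threshold."""
    arr = np.array([list(seq) for seq in msa])   # rows = sequences
    gap_fraction = (arr == '-').mean(axis=0)
    keep = gap_fraction <= max_gap_fraction
    return [''.join(row[keep]) for row in arr]

msa = ["ACG-TA",
       "ACGGTA",
       "AC--TA",
       "ACG-TA"]
filtered = filter_columns(msa)   # drops the column that is 75% gaps
```

Lowering the threshold removes more columns; the study's finding is that removing more than roughly 20% of positions tends to hurt downstream tree accuracy.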
https://www.datainsightsmarket.com/privacy-policy
The Product Recommendation System market is experiencing robust growth, projected to reach $6.88 billion in 2025 and exhibiting a remarkable Compound Annual Growth Rate (CAGR) of 33.06%. This expansion is fueled by the increasing adoption of e-commerce, the need for personalized customer experiences, and the rising availability of sophisticated data analytics tools. Key drivers include the growing preference for online shopping, the need to enhance customer engagement and loyalty, and the ability of recommendation systems to improve conversion rates and average order values. The market is segmented by deployment mode (on-premise and cloud), filtering techniques (collaborative, content-based, hybrid), and end-user industry (IT & Telecom, BFSI, Retail, Media & Entertainment, Healthcare). The cloud deployment model is gaining significant traction due to its scalability, flexibility, and cost-effectiveness. Hybrid recommendation systems, combining collaborative and content-based approaches, are also witnessing increased adoption for achieving a balance between personalization and efficiency. Major players like Amazon, Netflix, Salesforce, and Google are driving innovation and market competition, constantly improving algorithm accuracy and integrating AI-powered features. The competitive landscape is characterized by both established technology giants and specialized recommendation engine providers. Future growth will likely be driven by advancements in artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) technologies, enabling more accurate and personalized recommendations. The North American market currently holds a significant share, followed by Europe and Asia Pacific. However, the Asia Pacific region is anticipated to witness the fastest growth rate due to increasing internet penetration, rising smartphone usage, and a burgeoning e-commerce sector. 
While data privacy regulations and the potential for biased recommendations pose challenges, the overall market outlook remains highly positive, driven by ongoing technological advancements and the growing demand for personalized experiences across diverse industries. The market's growth trajectory signifies the crucial role product recommendation systems play in optimizing online retail experiences and enhancing customer satisfaction across multiple sectors. This ongoing expansion highlights the importance of continuous innovation and adaptation within this dynamic landscape. Recent developments include: January 2023 - Coveo Solutions Inc. opened a new office in London, England, to assist growth in Europe. The new office will serve clients in Europe, such as Philips, SWIFT, Vestas, Nestlé, Kurt Geiger, River Island, MandM Direct, Halfords, and Healthspan, which have chosen Coveo AI to improve the experiences of their customers, employees, and workplace. Coveo also collaborated with system integrators, referral partners, and strategic partners in other regions to offer search, personalization, recommendations, and merchandising to major corporations that want to significantly raise customer satisfaction, employee productivity, and overall profitability. August 2022 - Google announced plans to open three new Google Cloud regions in Malaysia, Thailand, and New Zealand, in addition to the six previously announced regions in Berlin, Dammam, Doha, Mexico, Tel Aviv, and Turin. Key drivers for this market are: Increasing Demand for the Customization of Digital Commerce Experience Across Mobile and Web, Growing Adoption by Retailers for Controlling Merchandising and Inventory Rules. Potential restraints include: Complexity Regarding Incorrect Labeling Due to Changing User Preferences. Notable trends are: Increasing Demand for Customization of Digital Commerce Experience Across Mobile and Web Drives the Market's Growth.
The data include atmospheric-loading frequency response functions (table 1) and filtered detrended and reconstructed (trends restored) groundwater levels (tables 2–4) computed for selected, parsed time series for three USGS monitoring wells [BR–1 (USGS site 410233104093203); LN–1 (USGS site 410233104093202); and FH–1 (USGS site 410233104093201)], and the associated hourly resampled water-level and barometric-pressure time-series "pieces" (tables 2–4) used to create the parsed series. Table headings are defined in the Data Dictionary. Digital filters were developed based on the computed water-level response to Earth tides and barometric pressure in all three wells, and these filters were used to compute the filtered water-level time series in tables 2, 3 and 4 for wells BR–1, LN–1, and FH–1, respectively. The content of tables (1–4) and the development of the digital filters and filtering techniques are described in the associated publication, https://doi.org/10.3133/sir20215020.
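A much-simplified stand-in for barometric correction (a single least-squares response coefficient, rather than the frequency-dependent digital filters developed in the associated publication) can be sketched on synthetic series as follows; the data below are illustrative, not from the monitoring wells:

```python
import numpy as np

def remove_barometric_response(water_level, baro):
    """Detrend both series, estimate a single barometric-efficiency
    coefficient by least squares, and subtract the fitted response."""
    t = np.arange(len(water_level))
    wl = water_level - np.polyval(np.polyfit(t, water_level, 1), t)
    bp = baro - np.polyval(np.polyfit(t, baro, 1), t)
    be = np.dot(bp, wl) / np.dot(bp, bp)   # least-squares response coefficient
    return wl - be * bp

rng = np.random.default_rng(0)
baro = np.cumsum(rng.normal(0.0, 1.0, 500))      # synthetic pressure record
true_wl = 0.1 * np.sin(np.arange(500) / 40.0)    # slow hydrologic signal
observed = true_wl - 0.4 * baro + 0.001 * np.arange(500)
filtered = remove_barometric_response(observed, baro)
```

Removing the fitted barometric (and, in the publication, Earth-tide) response leaves a filtered, detrended residual; restoring the trend afterwards gives the reconstructed series reported in the tables.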