Contains scans of a bin filled with different parts (screws, nuts, rods, spheres, sprockets). For each part type, an RGB image and an organized 3D point cloud obtained with a structured-light sensor are provided. In addition, an unorganized 3D point cloud representing an empty bin and a small Matlab script to read the files are also provided. The 3D data contain a lot of outliers, and the data were used to demonstrate a new filtering technique.
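This entry does not specify the new filtering technique itself, so as a neutral illustration of the kind of outlier removal such scans invite, here is a minimal Python sketch of statistical outlier removal based on k-nearest-neighbour distances; the synthetic cloud, neighbour count, and 2-sigma cut-off are all assumptions.

```python
# Flag points whose mean distance to their 8 nearest neighbours is
# unusually large; synthetic data stands in for the real scans.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.05, (2000, 3)),   # dense structure
                   rng.uniform(-1.0, 1.0, (100, 3))])  # sparse outliers

dists, _ = cKDTree(cloud).query(cloud, k=9)            # self + 8 neighbours
mean_knn = dists[:, 1:].mean(axis=1)                   # drop self-distance

# Keep points whose mean neighbour distance is within mean + 2*std.
keep = mean_knn < mean_knn.mean() + 2.0 * mean_knn.std()
print(f"kept {keep.sum()} of {len(cloud)} points")
```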
Filter is a configurable app template that displays a map with an interactive filtered view of one or more feature layers. The application displays prompts and hints for attribute filter values, which are used to locate specific features.

Use Cases
Filter displays an interactive dialog box for exploring the distribution of a single attribute or the relationship between different attributes. This is a good choice when you want to understand the distribution of different types of features within a layer, or create an experience where you can gain deeper insight into how the interaction of different variables affects the resulting map content.

Configurable Options
Filter can present a web map and be configured with the following options:
- Choose the web map used in the application.
- Provide a title and color theme. The default title is the web map name.
- Configure the ability for feature and location search.
- Define the filter experience and provide text to encourage user exploration of the data by displaying additional values to choose as the filter text.

Supported Devices
This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

Data Requirements
Requires at least one layer with an interactive filter. See the Apply Filters help topic for more details.

Get Started
This application can be created in the following ways:
- Click the Create a Web App button on this page.
- Share a map and choose to Create a Web App.
- On the Content page, click Create - App - From Template.

Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.
A data science project's primary objective is to analyze and prepare the data for the machine learning task at hand. Gathering the necessary data from the beauty domain is a crucial step toward producing accurate results, and to ensure that the gathered data are sufficient and relevant, it is vital to identify appropriate data sources and analyze them. Homemade remedy recipes are becoming increasingly popular around the world, with numerous remedy recipe videos available on YouTube and Google. This information is required to recommend a remedy based on a user's condition. The data set contains 18 different types of skin conditions identified by users through surveys.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Blockchain data query: base function selector filtering
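The query itself is not reproduced in this entry, but the idea behind function selector filtering is simple: an EVM call encodes the target function in the first 4 bytes of its calldata, so transactions can be filtered by that prefix. A minimal Python sketch with made-up transaction records:

```python
# Keep only transactions whose calldata starts with a given 4-byte
# selector. The transaction dicts are illustrative stand-ins.
TRANSFER_SELECTOR = "0xa9059cbb"  # keccak256("transfer(address,uint256)")[:4]

txs = [
    {"hash": "0x01", "input": "0xa9059cbb" + "00" * 64},  # transfer(...)
    {"hash": "0x02", "input": "0x095ea7b3" + "00" * 64},  # approve(...)
]

matches = [tx for tx in txs if tx["input"].lower().startswith(TRANSFER_SELECTOR)]
print([tx["hash"] for tx in matches])  # ['0x01']
```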
https://dataintelo.com/privacy-and-policy
The global web content filtering market size was valued at approximately USD 3.5 billion in 2023 and is projected to reach about USD 8.6 billion by 2032, growing at a CAGR of 10.7% during the forecast period. This robust growth is primarily driven by the increasing need for sophisticated content control mechanisms to protect against online threats and ensure compliance with organizational policies. The surge in internet usage, coupled with the escalating threat of cyber-attacks and malware, has necessitated the deployment of advanced web filtering technologies across various sectors. As enterprises continue to digitalize their operations, the demand for effective web content filtering solutions is anticipated to witness substantial growth.
One of the primary growth factors in the web content filtering market is the rising awareness and concern over cybersecurity threats. With businesses and individuals increasingly relying on the internet for day-to-day operations, the risk of exposure to inappropriate or harmful content has become more pronounced. Organizations are investing heavily in web content filtering solutions to safeguard their networks from malware, phishing attacks, and other cyber threats. Moreover, the adoption of remote working models has further accentuated the need for robust web content controls to ensure that employees access only secure and relevant online resources while working outside the secure corporate network.
Another significant growth driver is the regulatory landscape compelling organizations to implement stringent web filtering mechanisms. Various governments and regulatory bodies worldwide have introduced laws mandating organizations to keep their digital environments secure, thereby boosting the demand for web content filtering solutions. For instance, the General Data Protection Regulation (GDPR) in Europe and the Children's Internet Protection Act (CIPA) in the United States require entities to employ measures that prevent access to inappropriate content, especially in sectors such as education and healthcare. Compliance with these regulations is not only a legal obligation but also a trust-building measure with consumers, driving market growth.
Technological advancements are also playing a pivotal role in propelling the web content filtering market. The integration of artificial intelligence and machine learning into web content filtering solutions has significantly enhanced their effectiveness and efficiency. These technologies enable real-time content analysis and adaptive filtering, ensuring that only relevant and safe content is accessible. Furthermore, the rise of cloud-based filtering solutions offers scalability and flexibility, making them particularly attractive to small and medium enterprises (SMEs) that may not have the resources for extensive on-premise solutions. As technology continues to evolve, it will likely spur further innovation in content filtering solutions, providing enhanced security and user experience.
Content-control Software plays a crucial role in the web content filtering landscape, offering organizations the ability to manage and restrict access to online content based on predefined policies. This software is essential for businesses aiming to protect their networks from harmful content and ensure compliance with industry regulations. By implementing content-control software, companies can effectively monitor and filter web traffic, preventing access to inappropriate or malicious websites. This not only enhances security but also boosts productivity by minimizing distractions and ensuring that employees focus on work-related tasks. As cyber threats continue to evolve, the demand for sophisticated content-control software is expected to rise, driving innovation and growth in the market.
Regionally, North America holds a significant share of the web content filtering market, attributed primarily to the region's technological advancements and high adoption rate of cybersecurity solutions. Europe follows closely, driven by stringent data privacy regulations and a strong emphasis on digital security. Meanwhile, the Asia Pacific region is expected to register the highest growth rate, fueled by increasing internet penetration, rising cyber threats, and growing awareness among businesses regarding the importance of cybersecurity. Emerging economies in this region are witnessing rapid digital transformation, which is expected to create lucrative opportunities for market players in the coming years.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper presents a method for filtering point clouds generated by laser scanning to obtain a Digital Terrain Model (DTM). The filtering is performed against an approximated surface obtained from urban road points. These points are sampled along straight lines detected by the Steger operator in the laser pulse intensity image. The main assumption of the method is that the ground behaves smoothly inside a block, so the laser points sampled along urban roads allow, via kriging interpolation, a suitable representation of the terrain inside the block, i.e., one relatively close to the ground laser points in these regions. Filtering is thus performed by the proximity of the original laser point cloud to the approximated surface. A DTM is then obtained from the new sample by kriging interpolation, improving the description of the surface. The experiments verified the feasibility of the proposed method, with results of good visual consistency and satisfactory numerical indicators.
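The paper's pipeline (Steger line extraction plus kriging) is not reproduced here, but the core proximity-filtering step can be sketched in Python, with scipy's linear interpolation standing in for kriging and random points standing in for the road sample; the 0.5 m tolerance is an assumption.

```python
# Filter a point cloud by vertical proximity to a surface approximated
# from a sparse "road point" sample (stand-in data throughout).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
cloud = rng.random((5000, 3)) * [100.0, 100.0, 5.0]   # x, y, z in metres
road_pts = cloud[rng.choice(len(cloud), 200, replace=False)]

# Approximate the ground elevation at every cloud point from the sample.
z_surface = griddata(road_pts[:, :2], road_pts[:, 2],
                     cloud[:, :2], method="linear")

# Keep points within a vertical tolerance of the approximated surface.
tol = 0.5  # metres
dz = np.abs(cloud[:, 2] - z_surface)       # NaN outside the sample's hull
keep = np.nan_to_num(dz, nan=np.inf) <= tol
print(f"kept {keep.sum()} of {len(cloud)} candidate ground points")
```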
According to our latest research, the global Copyright Filter for Training Data market size in 2024 stands at USD 1.34 billion, reflecting the rapidly growing need for robust copyright protection in AI training ecosystems. The market is experiencing a strong CAGR of 18.1% from 2025 to 2033, with the forecasted market size reaching USD 5.59 billion by 2033. This growth is primarily driven by increasing regulatory scrutiny, the proliferation of generative AI models, and the escalating risk of copyright infringement in large-scale data curation processes.
The primary growth factor propelling the Copyright Filter for Training Data market is the exponential rise in AI-driven applications and the subsequent surge in demand for high-quality, legally compliant training datasets. As AI models become more sophisticated and are adopted across diverse industries, the volume and complexity of training data have increased significantly. This has amplified concerns regarding the unauthorized use of copyrighted content, prompting organizations to invest in advanced copyright filtering solutions. These tools not only mitigate legal risks but also enhance the integrity and ethical standards of AI model development, thereby fostering trust among stakeholders and end-users.
Another crucial driver is the evolving regulatory landscape, particularly in regions such as North America and Europe, where governments are enacting stringent data governance and copyright protection laws. The implementation of frameworks like the EU’s Digital Services Act and the U.S. Copyright Office’s guidelines for AI-generated content has necessitated the integration of automated copyright filters in the data preparation pipeline. Companies are increasingly prioritizing compliance to avoid costly litigation and reputational damage, fueling the adoption of both software and service-based copyright filtering solutions. This regulatory push is expected to intensify over the forecast period, further accelerating market expansion.
Furthermore, the proliferation of digital content and the democratization of data annotation have created new challenges for content moderation and copyright management. With the advent of user-generated content platforms, digital publishing, and the widespread use of third-party datasets, the risk of inadvertently incorporating copyrighted material into AI training sets has grown. This has prompted technology providers to innovate and develop more sophisticated, AI-powered copyright detection algorithms capable of handling diverse data formats and languages. The integration of machine learning and natural language processing capabilities into copyright filters has significantly improved their accuracy and scalability, making them indispensable tools in the AI development lifecycle.
Regionally, North America continues to dominate the Copyright Filter for Training Data market, accounting for the largest revenue share in 2024, followed closely by Europe and the Asia Pacific. The market’s robust growth in North America is attributed to the presence of leading technology companies, a mature legal framework, and high awareness regarding copyright compliance. Europe’s market is bolstered by strong regulatory mandates, while Asia Pacific is witnessing rapid adoption due to its burgeoning AI ecosystem and increasing investments in digital infrastructure. Latin America and the Middle East & Africa are emerging markets, showing steady growth as awareness and regulatory frameworks mature.
The Copyright Filter for Training Data market by component is segmented into software and services, both of which play pivotal roles in ensuring copyright compliance throughout the AI model development process. The software segment, comprising standalone copyright detection platforms and integrated modules within data management suites, dominates the market in 2024. These software solutions leverage advanced machine learning algorithms and natural language processing.
https://spdx.org/licenses/CC0-1.0.html
Functional trait-based approaches are increasingly used for studying the processes underlying community assembly. The relative influence of different assembly rules might depend on the spatial scale of analysis, the environmental context and the type of functional traits considered. By using a functional trait-based approach, we aim to disentangle the relative role of environmental filtering and interspecific competition on the structure of European ant communities according to the spatial scale and the type of trait considered. We used a large database on ant species composition that encompasses 361 ant communities distributed across the five biogeographic regions of Europe; these communities were composed of 155 ant species, which were characterized by 6 functional traits. We then analysed the relationship between functional divergence and co-occurrence between species pairs across different spatial scales (European, biogeographic region and local) and considering different types of traits (ecological tolerance and niche traits). Three different patterns emerged: negative, positive and non-significant regression coefficients suggest that environmental filtering, competition and neutrality are at work, respectively. We found that environmental filtering is important for structuring European ant communities at large spatial scales, particularly at the scale of Europe and most biogeographic regions. Competition could play a certain role at intermediate spatial scales where temperatures are more favourable for ant productivity (i.e. the Mediterranean region), while neutrality might be especially relevant in spatially discontinuous regions (i.e. the Alpine region). We found that no ecological mechanism (environmental filtering or competition) prevails at the local scale. The type of trait is especially important when looking for different assembly rules, and multi-trait grouping works well for traits associated with environmental responses (tolerance traits), but not for traits related to resource exploitation (niche traits). The spatial scale of analysis, the environmental context and the chosen traits merit special attention in trait-based analyses of community assembly mechanisms.
Methods
Data analyses
Different trait-based approaches have been used to distinguish the stochastic and deterministic (environmental vs. biotic filtering) processes that structure biotic communities. The approach we use can disentangle the role of environmental filtering and competitive exclusion by analysing the relationship between species pair co-occurrence and functional dissimilarity (2). From this analysis, three different patterns might emerge. First, if species with similar functional traits co-occur more often than expected by chance, the relationship between co-occurrence and functional dissimilarity of pairs of species will be significant and negative (i.e. environmental filtering process). Contrary to this, if species with divergent traits co-occur more often than expected at random, the relationship will be significant and positive (i.e. competitive exclusion process). Finally, non-significant relationships between co-occurrence and functional dissimilarity of species pairs are also possible (i.e. neutral theory processes). This would be the case where species co-occur independently of their functional similarity, or alternatively, if environmental filtering and competitive exclusion are simultaneously at work with similar contributions. Here, we assume that two species co-occur when they occur spatially in the same community, although they might not share the same foraging time.
The co-occurrence index for each species pair was calculated within each species × site (European and regional scales) and species × bait (local scale) matrix. Data for the co-occurrence analyses consist of binary presence-absence matrices, where each row is a species, each column a site (or a bait), and the entries are presence (1) or absence (0) of a species at a site or a bait. Pairwise co-occurrence was calculated using the Jaccard index of similarity (JI_ab) for each pair of species in each matrix (47):
JI_ab = AB / (A + B + AB)
where A and B are the number of sites where only species a and species b occur, respectively, and AB the number of sites where species a and b co-occur. The Jaccard similarity index takes values between 0 and 1, where 0 means that the two species are never found in the same site, and in our case, that co-occurrence is null; while 1 indicates that the two species are always together, and in our case, that the co-occurrence is total.
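A minimal Python sketch of this index on a toy species × site presence-absence matrix (the study computed it per matrix at each scale):

```python
# Pairwise Jaccard co-occurrence from a binary species x site matrix.
import numpy as np

sites = np.array([[1, 0, 1, 1],   # species a
                  [1, 1, 0, 1],   # species b
                  [0, 1, 0, 0]])  # species c

def jaccard(pres_a, pres_b):
    both = np.sum((pres_a == 1) & (pres_b == 1))    # AB: co-occurrences
    only_a = np.sum((pres_a == 1) & (pres_b == 0))  # A: only species a
    only_b = np.sum((pres_a == 0) & (pres_b == 1))  # B: only species b
    return both / (only_a + only_b + both)

print(jaccard(sites[0], sites[1]))  # 0.5: 2 shared sites, 1 + 1 exclusive
```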
In order to measure functional dissimilarity between species pairs, we computed Gower’s dissimilarity between two species based on each functional trait separately, pooling traits according to whether they are ‘ecological tolerance’ or ‘ecological niche’ traits, and pooling all traits together. We used Gower’s dissimilarity, so that we would be able to deal with quantitative and qualitative traits (48). To compute it, we used a functional matrix where rows were species, columns were traits, and cell values were the trait values. Since Gower's dissimilarity depends on the number of species in the matrix, it was only calculated for each pair of species with data from the largest scale (Europe) where the number of species is highest. For each pair of species, nine functional dissimilarities were calculated: one with all functional traits together; one with only the ecological niche traits; one with the ecological tolerance traits; and one for each of the six traits separately. For these computations we used the ‘vegan’ (49) and ‘cluster’ (50) packages in R software v. 3.2.2 (51).
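A hedged sketch of Gower's dissimilarity for a single species pair with mixed trait types; the trait names, values, and ranges are illustrative, with the quantitative ranges meant to be taken over all species at the largest (European) scale, as described above.

```python
# Gower dissimilarity for one species pair: range-normalized absolute
# difference for quantitative traits, simple mismatch for qualitative
# ones, averaged over all traits. All values are hypothetical.
import numpy as np

quant_a = np.array([4.2, 0.8])        # e.g. two quantitative traits
quant_b = np.array([6.0, 1.1])
quant_range = np.array([5.0, 2.0])    # max - min over all species

qual_a = np.array(["omnivore"])       # e.g. a qualitative diet trait
qual_b = np.array(["granivore"])

d_quant = np.abs(quant_a - quant_b) / quant_range
d_qual = (qual_a != qual_b).astype(float)

gower = np.mean(np.concatenate([d_quant, d_qual]))
print(round(gower, 3))  # (0.36 + 0.15 + 1.0) / 3 = 0.503
```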
The relationship between the functional dissimilarity and the co-occurrence index between species pairs was tested using linear models. Given the large number of zeros in the co-occurrence index and the failure to meet normality assumptions, we carried out the analyses in two steps. First, we transformed the co-occurrence index into a binary variable indicating whether or not the pair of species co-occurred in each matrix, and used a generalized linear model with a binomial distribution and a logit link function to perform the analysis (hereafter, binary co-occurrence analysis). In a second step, we applied a general linear model to the co-occurrence index of pairs of species that occur together at least once in the matrix (hereafter, co-occurrence strength analysis). In this case, the co-occurrence index was log-transformed to satisfy normality assumptions. We performed 18 analyses at the European scale (nine for binary occurrence matrices and nine for co-occurrence strength matrices, these last nine comprising one analysis with all traits together, two analyses corresponding to the two groups of traits, and six analyses corresponding to each trait separately), 90 analyses at the biogeographic scale (forty-five for binary occurrence matrices and forty-five for occurrence strength matrices, of which nine analyses corresponded to each of the five biogeographic regions), and 333 analyses at the local scale (117 for binary occurrence matrices and 216 for co-occurrence strength matrices; in total, 37 analyses with all traits together, 37 for each of the two trait groups, and 222 across the six individual traits). It is worth noting that binary co-occurrence analyses were only performed in locations where more than five pairs of species showed values of co-occurrence = 0. Generalized and general linear models were conducted using the 'stats' package in R.
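A sketch of this two-step scheme using Python's statsmodels in place of R's 'stats' package, on simulated placeholder data:

```python
# Step 1: binomial GLM (logit link) on whether a pair ever co-occurs.
# Step 2: OLS on the log co-occurrence index of pairs that do.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
func_dissim = rng.random(200)  # Gower dissimilarities (placeholders)
jaccard = np.clip(0.4 - 0.3 * func_dissim + rng.normal(0, 0.1, 200), 0, 1)

X = sm.add_constant(func_dissim)

binary = (jaccard > 0).astype(int)  # pair co-occurs at least once
step1 = sm.GLM(binary, X, family=sm.families.Binomial()).fit()  # logit is default

pos = jaccard > 0
step2 = sm.OLS(np.log(jaccard[pos]), X[pos]).fit()

# A negative dissimilarity coefficient suggests environmental filtering,
# a positive one competitive exclusion (see the text above).
print(step1.params, step2.params)
```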
We seek to mitigate the challenges with web-scraped and off-the-shelf POI data, and provide tailored, complete, and manually verified datasets with Geolancer. Our goal is to help represent the physical world accurately for applications and services dependent on precise POI data, and offer a reliable basis for geospatial analysis and intelligence.
Our POI database is powered by our proprietary POI collection and verification platform, Geolancer, which provides manually verified, authentic, accurate, and up-to-date POI datasets.
Enrich your geospatial applications with a contextual layer of comprehensive and actionable information on landmarks, key features, business areas, and many more granular, on-demand attributes. We offer on-demand data collection and verification services that fit unique use cases and business requirements. Using our advanced data acquisition techniques, we build and offer tailormade POI datasets. Combined with our expertise in location data solutions, we can be a holistic data partner for our customers.
KEY FEATURES
- Our proprietary, industry-leading manual verification platform Geolancer delivers up-to-date, authentic data points.
- POI-as-a-Service with on-demand verification and collection in 170+ countries, leveraging our network of 1M+ contributors.
- Customise your feed by refresh rate, location, country, category, and brand, based on your specific needs.
- Data noise filtering algorithms normalise and de-duplicate POI data, so it is ready for analysis with minimal preparation.
DATA QUALITY
Quadrant's POI data are manually collected and verified by Geolancers. Our network of freelancers maps cities and neighborhoods, adding and updating POIs in our proprietary Geolancer app on their smartphones. Compared to other methods, this process guarantees accuracy and a healthy stream of POI data. This method of data collection also steers clear of infringing on users' privacy or selling their location data: these purpose-built apps do not store, collect, or share any data other than physical location, without tying context back to an actual human being or their mobile device.
USE CASES
The main goal of POI data is to identify a place of interest, establish its accurate location, and help businesses understand the happenings around that place to make better, well-informed decisions. POI can be essential in assessing competition, improving operational efficiency, planning the expansion of your business, and more.
It can be used by businesses to power their apps and platforms for last-mile delivery, navigation, mapping, logistics, and more. Combined with mobility data, POI data can be employed by retail outlets to monitor traffic to their own sites or those of their competitors. Logistics businesses can save costs and improve customer experience with accurate address data. Real estate companies use POI data for site selection and project planning based on market potential. Governments can use POI data to enforce regulations, monitor public health and well-being, plan public infrastructure and services, and more.
ABOUT GEOLANCER
Quadrant's POI-as-a-Service is powered by Geolancer, our industry-leading manual verification project. Geolancers, equipped with a smartphone running our proprietary app, manually add and verify POI data points, ensuring accuracy and authenticity. Geolancer helps data buyers acquire data with the update frequency suited for their specific use case.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this study, we compared the LiDAR filtering performance of unsupervised machine learning methods, such as linkage, K-means, and self-organizing maps, for urban areas, to provide a practical guide for researchers. The input parameters (x-y-z and intensity) were normalized and weighted using a chi-squared independence test to improve classification accuracy. The best results were obtained using the weighted linkage method, with total errors of 13.53%, 3.96%, and 1.07% for the three samples, respectively. In comparison with other approaches, the chi-squared-weighted methods have significant potential for classification and filtering and outperform many popular approaches.
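The study's exact chi-squared weighting is not detailed in this summary, so the following Python sketch only illustrates the overall shape of the approach: min-max normalize the four input parameters, scale them by (hypothetical) weights, cluster with K-means, and label the lower-elevation cluster as ground.

```python
# Weighted K-means ground filtering of a LiDAR point cloud (sketch).
# `points` and the per-feature weights are stand-ins; in the study the
# weights come from a chi-squared independence test.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.random((1000, 4))  # stand-in for x, y, z, intensity

norm = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
weights = np.array([0.8, 0.8, 1.5, 1.0])  # hypothetical chi-squared weights

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(norm * weights)

# Call the cluster with the lower mean elevation (z) "ground".
ground_cluster = np.argmin([points[labels == k, 2].mean() for k in (0, 1)])
ground = points[labels == ground_cluster]
print(f"{len(ground)} of {len(points)} points classified as ground")
```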
https://dataintelo.com/privacy-and-policy
According to our latest research, the global active harmonic filter for data centers market size reached USD 1.21 billion in 2024, reflecting the growing importance of power quality management in mission-critical environments. The market is expected to witness a robust CAGR of 7.8% from 2025 to 2033, with the market size projected to reach USD 2.38 billion by 2033. This growth is primarily driven by the increasing deployment of high-density computing infrastructure, the proliferation of hyperscale and edge data centers, and the rising demand for uninterrupted power supply with minimal electrical disturbances.
One of the primary growth factors for the active harmonic filter for data centers market is the exponential increase in digital transformation initiatives across industries. As enterprises migrate to cloud-based platforms and deploy artificial intelligence, machine learning, and big data analytics, the power demands and the complexity of data center operations have surged. These trends have led to the integration of more sophisticated electrical and electronic equipment, which inherently generate higher levels of harmonic distortion. Active harmonic filters are being rapidly adopted to mitigate these distortions, ensuring stable and reliable power quality, and thus preventing equipment malfunctions, downtime, and costly repairs. The regulatory push for energy efficiency and the need for compliance with international power quality standards such as IEEE 519 further accelerate the adoption of advanced harmonic filtering solutions in data centers worldwide.
Another significant driver is the rapid expansion of hyperscale and colocation data centers. With the global data sphere expected to double every two years, data center operators are under pressure to scale up their infrastructure while maintaining operational efficiency and sustainability. The integration of renewable energy sources and sophisticated uninterruptible power supply (UPS) systems has made power management more complex, increasing the risk of harmonic-related issues. Active harmonic filters play a crucial role in these environments by dynamically compensating for harmonic currents, thereby enhancing the lifespan of sensitive electronic components and improving overall system reliability. The market is also benefiting from technological advancements such as IoT-enabled monitoring, real-time analytics, and modular filter designs, which offer scalability and ease of integration into existing power systems.
Furthermore, the growing awareness of the financial and reputational risks associated with power quality issues has prompted data center operators to invest proactively in advanced power conditioning solutions. Harmonics-related disturbances can lead to overheating, increased energy losses, and premature failure of critical equipment such as servers, cooling systems, and networking devices. As data centers strive to achieve higher uptime and service-level agreements (SLAs), the deployment of active harmonic filters has emerged as a best practice for ensuring uninterrupted operations. The rise of edge computing and the deployment of micro data centers in remote or challenging environments further underscore the need for compact, efficient, and reliable harmonic mitigation solutions.
From a regional perspective, North America remains the dominant market for active harmonic filters in data centers, owing to its mature IT infrastructure, high concentration of hyperscale data centers, and stringent regulatory standards. However, Asia Pacific is witnessing the fastest growth, driven by rapid digitalization, increasing investments in cloud infrastructure, and government initiatives to promote smart cities and Industry 4.0. Europe is also a significant market, characterized by a strong focus on energy efficiency and sustainability. Latin America and the Middle East & Africa are emerging as promising markets, supported by growing data center investments and the expansion of telecom networks. The competitive landscape is shaped by both global players and regional specialists, with innovation, product differentiation, and after-sales support emerging as key success factors.
The active harmonic filter for data centers market is segmented by product type into Shunt Active Harmonic Filters, Series Active Harmonic Filters, and Hybrid Active Harmonic Filters.
R was used for the pipeline. All R code is provided for the creation of the simulated datasets and the filtering of those datasets.
We also provide the .012 data input files (.txt) with their env files (.env) and the outputs of baypass (.csv) and lfmm (calpval).
The names of the outputs look like this: emsim_156_6_0.5_0.1.txt.lfmm_env_2.calpval (the fields are parsed in the sketch after the list below). This naming convention is the same throughout.
emsim = name of the dataset, i.e., the E. microcarpa simulation
156 = number of individuals, i.e., sample size
6 = number of individuals per population
0.5 = the missing data threshold (note: for coding purposes this is actually the % of data kept, so 10% missing data will be 0.9) (one of 0.5, 0.6, 0.7, 0.8, or 0.9)
0.1 = minor allele frequency (one of 0.1, 0.05, or 0.01)
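A small Python sketch parsing these fields from the example file name above; it assumes the underscore-delimited pattern holds for all outputs.

```python
# Parse the naming convention emsim_<n>_<per-pop>_<kept>_<maf>... .
name = "emsim_156_6_0.5_0.1.txt.lfmm_env_2.calpval"

base = name.split(".txt")[0]                # "emsim_156_6_0.5_0.1"
dataset, n_ind, n_per_pop, kept, maf = base.split("_")

print(dataset)                              # emsim: E. microcarpa simulation
print(int(n_ind), int(n_per_pop))           # 156 individuals, 6 per population
print(f"missing data: {1 - float(kept):.1f}")  # 0.5 kept -> 0.5 missing
print(f"MAF threshold: {float(maf)}")
```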
Associated SNPs
V#####MT - SNPs associated with BIO5
V#####MP - SNPs associated with BIO14
Tool: Microsoft Excel
Dataset: Coffee Sales
Process:
1. Data Cleaning:
• Remove duplicates and blanks.
• Standardize date and currency formats.
2. Data Manipulation:
• Use sorting and filtering functions to work with subsets of interest in the data.
• Use XLOOKUP, INDEX-MATCH, and IF formulas for efficient data manipulation, such as retrieving, matching, and organising information in spreadsheets.
3. Data Analysis:
• Create Pivot Tables and Pivot Charts with formatting to visualize trends (see the sketch below).
4. Dashboard Development:
• Insert Slicers with formatting for easy filtering and dynamic updates.
Highlights: This project aims to understand coffee sales trends by country, roast type, and year, which could help identify marketing opportunities and customer segments.
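The project itself is pure Excel; as an illustrative analogue only, the pivot-table step can be expressed in pandas (column names and sample rows are hypothetical):

```python
# Summarize sales by country and roast type, mirroring a Pivot Table.
import pandas as pd

sales = pd.DataFrame({
    "Country":   ["US", "US", "UK", "UK"],
    "RoastType": ["Dark", "Light", "Dark", "Light"],
    "Year":      [2021, 2021, 2022, 2022],
    "Sales":     [120.5, 98.0, 87.5, 110.0],
})

pivot = sales.pivot_table(values="Sales", index="Country",
                          columns="RoastType", aggfunc="sum", margins=True)
print(pivot)
```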
Next generation sequencing (NGS) technologies generate huge amounts of sequencing data. Several microbial genome projects, in particular fungal whole genome sequencing, have used NGS techniques because of their cost efficiency. However, NGS techniques also demand computational tools to process and analyze massive datasets. Implementing a few data processing steps, including quality and length filters, often leads to a remarkable improvement in the accuracy and quality of data analyses. Choosing appropriate parameters for this purpose is not always straightforward, as they vary with the dataset. In this study we present the FastQFS (Fastq Quality Filtering and Statistics) tool, which can be used both for read filtering and for assessing filtering parameters. Several tools are available, but an important asset of FastQFS is that it reports which filtering parameters best fit the raw dataset before computationally expensive filtering is run. It generates statistics of reads meeting different quality and length thresholds, as well as the expected genome coverage depth that would remain after applying different filtering parameters. The FastQFS tool will help researchers make informed decisions on NGS read filtering parameters, avoiding time-consuming optimization of filtering criteria.
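FastQFS itself is not shown here; the Python sketch below only illustrates the kind of threshold survey it performs, counting the reads that pass each quality/length pair and the expected genome coverage. The FASTQ file name, thresholds, and genome size are assumptions.

```python
# Survey quality/length filtering thresholds over a FASTQ file.
genome_size = 40_000_000  # hypothetical fungal genome size, in bases

def read_fastq(path):
    # Plain 4-line FASTQ records: header, sequence, '+', qualities.
    with open(path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            seq = fh.readline().strip()
            fh.readline()                  # '+' separator line
            quals = fh.readline().strip()
            yield seq, quals

def mean_quality(quals):
    return sum(ord(c) - 33 for c in quals) / len(quals)  # Phred+33

for min_q, min_len in [(20, 50), (25, 75), (30, 100)]:
    kept_reads = kept_bases = 0
    for seq, quals in read_fastq("reads.fastq"):
        if len(seq) >= min_len and mean_quality(quals) >= min_q:
            kept_reads += 1
            kept_bases += len(seq)
    print(f"Q>={min_q}, len>={min_len}: {kept_reads} reads, "
          f"{kept_bases / genome_size:.1f}x expected coverage")
```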
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Replication files for survey experiments and observational analysis of Russian mayoral candidates
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Author: Andrew J. Felton
Date: 10/29/2024
This R project contains the primary code and data (following pre-processing in python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:
"Global estimates of the storage and transit time of water through vegetation"
Please note that 'turnover' and 'transit' are used interchangeably. Also please note that this R project has been updated multiple times as the analysis has evolved.
Data information:
The data folder contains key data sets used for analysis. In particular:
"data/turnover_from_python/updated/august_2024_lc/" contains the core datasets used in this study including global arrays summarizing five year (2016-2020) averages of mean (annual) and minimum (monthly) transit time, storage, canopy transpiration, and number of months of data able as both an array (.nc) or data table (.csv). These data were produced in python using the python scripts found in the "supporting_code" folder. The remaining files in the "data" and "data/supporting_data"" folder primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here. The "supporting_data"" folder also contains annual (2016-2020) MODIS land cover data used in the analysis and contains separate filters containing the original data (.hdf) and then the final process (filtered) data in .nc format. The resulting annual land cover distributions were used in the pre-processing of data in python.
Code information:
Python scripts can be found in the "supporting_code" folder.
Each R script in this project has a role:
"01_start.R": This script sets the working directory, loads in the tidyverse package (the remaining packages in this project are called using the `::` operator), and can run two other scripts: one that loads the customized functions (02_functions.R) and one for importing and processing the key dataset for this analysis (03_import_data.R).
"02_functions.R": This script contains custom functions. Load this using the
`source()` function in the 01_start.R script.
"03_import_data.R": This script imports and processes the .csv transit data. It joins the mean (annual) transit time data with the minimum (monthly) transit data to generate one dataset for analysis: annual_turnover_2. Load this using the
`source()` function in the 01_start.R script.
"04_figures_tables.R": This is the main workhouse for figure/table production and
supporting analyses. This script generates the key figures and summary statistics
used in the study that then get saved in the manuscript_figures folder. Note that all
maps were produced using Python code found in the "supporting_code"" folder.
"supporting_generate_data.R": This script processes supporting data used in the analysis, primarily the varying ground-based datasets of leaf water content.
"supporting_process_land_cover.R": This takes annual MODIS land cover distributions and processes them through a multi-step filtering process so that they can be used in preprocessing of datasets in python.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Brazil Sales: General Use: Others nes: Apparatus for Filtering or Purifying Liquids data was reported at 315,557.283 BRL th in 2017. This records a decrease from the previous number of 503,589.053 BRL th for 2016. Brazil Sales: General Use: Others nes: Apparatus for Filtering or Purifying Liquids data is updated yearly, averaging 503,589.053 BRL th from Dec 2005 (Median) to 2017, with 13 observations. The data reached an all-time high of 741,897.000 BRL th in 2011 and a record low of 269,439.000 BRL th in 2006. Brazil Sales: General Use: Others nes: Apparatus for Filtering or Purifying Liquids data remains active status in CEIC and is reported by Brazilian Institute of Geography and Statistics. The data is categorized under Brazil Premium Database’s Machinery and Equipment Sector – Table BR.RMB002: Machinery and Equipment Sales: General Use.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Bilateral Filtering is a dataset for object detection tasks - it contains Nodules annotations for 280 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
ReBeatICG database contains ICG (impedance cardiography) signals recorded during an experimental session of a virtual search and rescue mission with drones. It includes beat-to-beat annotations of the ICG characteristic points, made by a cardiologist, for the purpose of testing ICG delineation algorithms. Synchronous reference ECG signals are included to allow comparison and to mark cardiac events.

Raw data
The database includes 48 recordings of ICG and ECG signals from 24 healthy subjects during an experimental session of a virtual search and rescue mission with drones, described in [1]. Two 5-minute segments were selected from each subject: one corresponding to the baseline state (task BL) and one recorded during higher levels of cognitive workload (task CW). In total, the database consists of 240 minutes of ICG signals. Various signals were recorded during the experiment, but only the ICG and ECG data are provided here. Raw data were recorded at 2000 Hz using the Biopac system.

Data Preprocessing (filtering)
For the purpose of annotation by cardiologists, the data were first downsampled from 2000 Hz to 250 Hz and then filtered with an adaptive Savitzky-Golay filter of order 3. "Adaptive" refers to the adaptive selection of the filter length, which plays a major role in the efficacy of the filter. The filter length was selected based on the SNR level of the first 3 seconds of each recording, following the procedure described below. Starting from a filter length of 3 (i.e., the minimum length allowed), the length is increased in steps of two until the signal SNR reaches 30 or the improvement is lower than 1% (i.e., the SNR improvement saturates with further increases in filter length). These values present a good compromise between reducing noise and over-smoothing the signal (and hence potentially losing valuable details), while a lower filter length reduces complexity. The SNR is calculated as the ratio between the 2-norms of the high and low signal frequencies, with a 20 Hz cut-off frequency.

Data Annotation
To assess the performance of ICG delineation algorithms, a subset of the database was annotated by a cardiologist from Lausanne University Hospital (CHUV) in Switzerland. The annotated subset consists of 4 randomly chosen signal segments of 10 beats each from each subject and task (i.e., 4 segments from the BL task and 4 from the CW task). Segments with artifacts and heavy noise were excluded when selecting the data for annotation; in such cases, 8 segments were chosen from the task with cleaner signals. In total, 1920 (80 x 24) beats were selected for annotation. For each cardiac cycle, four characteristic points were annotated, using the following definitions:
- C peak -- The peak with the greatest amplitude in one cardiac cycle, representing the maximum systolic flow.
- B point -- Indicates the onset of the final rapid upstroke toward the C point [3], expressed as the point of significant change in the slope of the ICG signal preceding the C point. It is related to the aortic valve opening; however, its identification can be difficult due to variations in ICG signal morphology. A decisional algorithm has been proposed to guide accurate and reproducible B point identification [4].
- X point -- Often defined as the minimum dZ/dt value in one cardiac cycle. However, this does not always hold true due to variations in the dZ/dt waveform morphology [5]; thus, the X point is defined as the onset of the steep rise in the ICG towards the O point. It represents the aortic valve closing, which occurs simultaneously with the end of the T wave on the ECG signal.
- O point -- The highest local maximum in the first half of the C-C interval. It represents the mitral valve opening.

Annotation was performed using open-access software (https://doi.org/10.5281/zenodo.4724843). Annotated points are saved in separate files for each subject and task, representing the locations of the points in the original signal.

Data structure
Data are organized in three folders: raw data (01_RawData), filtered data (02_FilteredData), and annotated points (03_ExpertAnnotations). In each folder, data are separated into files per subject and task (except in 03_ExpertAnnotations, where 2 CW task files were not annotated due to an excessive amount of noise). All files are Matlab .mat files. Raw and filtered data .mat files contain synchronized "ICG" and "ECG" data, as well as "samplFreq" values; filtered data files also include the final chosen Savitzky-Golay filter length ("SGFiltLen"). Annotated data .mat files contain only the matrix "annotPoints", with each row representing one cardiac cycle and the columns giving the positions of the B, C, X, and O points, respectively. Positions are expressed as the number of samples from the beginning of the full database files (signals in the 01_RawData and 02_FilteredData folders). In rare cases there are fewer than 40 (or 80) values per file, where the data were noisy and the cardiologist could not confidently annotate each cardiac cycle.

References
[1] F. Dell'Agnola, "Cognitive Workload Monitoring in Virtual Reality Based Rescue Missions with Drones," pp. 397-409, 2020, doi: 10.1007/978-3-030-49695-1_26.
[2] H. Yazdanian, A. Mahnam, M. Edrisi, and M. A. Esfahani, "Design and Implementation of a Portable Impedance Cardiography System for Noninvasive Stroke Volume Monitoring," J. Med. Signals Sens., vol. 6, no. 1, pp. 47-56, Mar. 2016.
[3] A. Sherwood (Chair), M. T. Allen, J. Fahrenberg, R. M. Kelsey, W. R. Lovallo, and L. J. P. van Doornen, "Methodological Guidelines for Impedance Cardiography," Psychophysiology, vol. 27, no. 1, pp. 1-23, 1990, doi: 10.1111/j.1469-8986.1990.tb02171.x.
[4] J. R. Árbol, P. Perakakis, A. Garrido, J. L. Mata, M. C. Fernández-Santaella, and J. Vila, "Mathematical detection of aortic valve opening (B point) in impedance cardiography: A comparison of three popular algorithms," Psychophysiology, vol. 54, no. 3, pp. 350-357, 2017, doi: 10.1111/psyp.12799.
[5] M. Nabian, Y. Yin, J. Wormwood, K. S. Quigley, L. F. Barrett, and S. Ostadabbas, "An Open-Source Feature Extraction Tool for the Analysis of Peripheral Physiological Data," IEEE J. Transl. Eng. Health Med., vol. 6, p. 2800711, 2018, doi: 10.1109/JTEHM.2018.2878000.
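A minimal Python sketch of the adaptive Savitzky-Golay procedure described under Data Preprocessing above, with two stated assumptions: scipy requires the window length to exceed the polynomial order, so the search starts at 5 rather than 3, and the SNR is read as the low-frequency norm over the high-frequency norm at the 20 Hz cut-off.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

fs = 250  # Hz, the annotation sampling rate

def snr(sig):
    # Ratio of 2-norms of low- vs. high-frequency content (20 Hz cut-off).
    b, a = butter(4, 20 / (fs / 2))        # 4th-order low-pass
    low = filtfilt(b, a, sig)
    return np.linalg.norm(low) / (np.linalg.norm(sig - low) + 1e-12)

def adaptive_savgol(sig, order=3, target_snr=30.0, min_gain=0.01, max_len=101):
    best_len = 5                           # smallest odd window > order 3
    best = savgol_filter(sig, best_len, order)
    prev = snr(best)
    for length in range(best_len + 2, max_len, 2):
        cand = savgol_filter(sig, length, order)
        cur = snr(cand)
        if cur >= target_snr:              # SNR target reached
            return cand, length
        if (cur - prev) / prev < min_gain: # < 1% improvement: stop
            break
        best, best_len, prev = cand, length, cur
    return best, best_len

# Demo on 3 s of a synthetic 1.2 Hz "ICG-like" wave plus noise.
t = np.arange(0, 3, 1 / fs)
icg = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.default_rng(3).normal(size=t.size)
filtered, win = adaptive_savgol(icg)
print(f"selected filter length: {win}")
```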
According to our latest research, the global WASM Filters marketplace market size reached USD 524 million in 2024, reflecting robust adoption across diverse sectors. The market is projected to grow at a CAGR of 19.8% from 2025 to 2033, reaching a forecasted value of USD 2,606 million by 2033. This remarkable growth trajectory is primarily fueled by the increasing integration of WebAssembly (WASM) filters in network management, security, and data processing applications, as organizations seek enhanced performance, flexibility, and security in digital infrastructure.
A key growth factor driving the WASM Filters marketplace is the rising demand for high-performance, platform-agnostic filtering solutions in modern digital ecosystems. As enterprises and service providers migrate to microservices architectures and cloud-native platforms, the need for lightweight, portable, and easily updatable filters has become paramount. WASM filters, with their ability to run securely in sandboxed environments and deliver near-native execution speeds, are rapidly replacing traditional filtering mechanisms in network traffic management, application security, and data transformation. The proliferation of edge computing and the Internet of Things (IoT) further amplifies this demand, as WASM filters enable real-time data processing and policy enforcement at the network edge, reducing latency and improving overall system responsiveness.
Another significant driver is the escalating emphasis on cybersecurity and compliance in the wake of sophisticated cyber threats and stringent data privacy regulations. Security filters powered by WASM are increasingly deployed to inspect, validate, and modify network packets, web requests, and application data in real time, providing organizations with granular control over traffic flows and enhanced protection against attacks. The flexibility to customize and chain filters for specific use cases—such as deep packet inspection, anomaly detection, and protocol translation—empowers enterprises to rapidly adapt to evolving threat landscapes. This adaptability, coupled with the ability to deploy filters across cloud, on-premises, and hybrid environments, positions WASM filters as a cornerstone technology for secure digital transformation.
The ongoing evolution of cloud services and the expansion of 5G and IoT networks are also catalyzing the adoption of WASM filters. Service providers and cloud vendors are leveraging WASM-based filtering to optimize resource allocation, enforce dynamic policies, and deliver differentiated services to customers. In addition, the open-source nature of many WASM filter frameworks fosters a vibrant developer ecosystem, accelerating innovation and reducing time-to-market for new solutions. As organizations increasingly prioritize agility, scalability, and interoperability, the WASM Filters marketplace is poised for sustained expansion across both established and emerging economies.
From a regional perspective, North America currently dominates the WASM Filters market, driven by early adoption among technology giants, cloud providers, and digital-first enterprises. Europe follows closely, with strong regulatory impetus for secure data processing and cross-border compliance. The Asia Pacific region is witnessing the fastest growth, propelled by rapid digitization, government initiatives, and the proliferation of next-generation connectivity infrastructure. Latin America and the Middle East & Africa are also emerging as promising markets, as local enterprises and service providers invest in modernizing their digital infrastructure and enhancing cybersecurity capabilities.
The WASM Filters marketplace is segmented by filter type into Network Filters, Security Filters, Data Processing Filters, Custom Filters, and Others, each addressing unique requirements within modern digital ecosystems. Network filters constitute a substantial share of the market.