The data in this dataset were collected by Yiqi Tang. The initial data came from the Internet and were then manually filtered to remove blurred, distorted, and low-resolution images. The data were randomly split into training and testing sets in a ratio of approximately 4:1. The training and testing sets were then augmented with operations such as stretching, inversion, and brightness adjustment. Finally, three data processing methods were applied: 1. Grayscale conversion 2. Edge extraction with a low threshold 3. Edge extraction with a high threshold
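A minimal sketch of this kind of processing, assuming OpenCV and illustrative Canny thresholds (the dataset description does not specify the exact operators or threshold values used):

```python
import cv2

def process_image(path, low_threshold=50, high_threshold=150):
    """Produce the three variants described above: grayscale,
    low-threshold edge map, and high-threshold edge map."""
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Canny takes a lower and an upper hysteresis threshold; the pairs below are illustrative.
    edges_low = cv2.Canny(gray, low_threshold, low_threshold * 2)
    edges_high = cv2.Canny(gray, high_threshold, high_threshold * 2)
    return gray, edges_low, edges_high
```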
This systematic review of the literature was conducted with the PRISMA method to explore the contexts in which the use of open government data germinates, to identify barriers to its use, including the role of data literacy among those barriers, and to examine the role of open data in promoting informal learning that supports the development of critical data literacy. This file includes a codebook of the main characteristics studied in the systematic literature review, in which data from 66 articles related to Open Data Usage were identified and coded. The file also includes an analysis of Cohen's Kappa, a concordance statistic used to measure the level of agreement among researchers when classifying articles on the characteristics defined in the codebook. Finally, it includes the main tables of the results analysis.
Classification accuracies (%) comparing models that use five data enhancement methods.
Identity resolution links inbound consumer data coming from sources such as web forms, online purchases, email, direct mail, and call centers, all in a privacy-compliant manner.
Matching offline data to deterministic online data enables more precise online targeting using demographics such as age, income, wealth, and lifestyle.
Marketing attribution helps you understand which messages and offers are driving conversions.
Privacy-compliant mobile location data can be leveraged to infer interests, drive messaging, and optimize timing.
The LOL, LOLv2-Real, LSRW, DICM, LIME, MEF, and NPE datasets can be acquired from the following links
https://creativecommons.org/publicdomain/zero/1.0/
This dataset provides paired image samples captured under low-light and normal illumination conditions. It is structured to support research in image enhancement, restoration, and editing optimization.
Total Files: 970
Total Folders: 2 (short_exposure, long_exposure)
Image Format: PNG and RAW
Resolution: Up to 4240×2832 pixels
Scene Types: Indoor and outdoor environments
Camera Source: Sony imaging devices
Key Features:
image_name – Name of the image file
folder_type – Indicates whether the image is from short or long exposure set
exposure_ratio – Ratio between short and long exposure times
scene_id – Identifier for scene or capture set
brightness_level – Approximate luminance measure
camera_type – Device used to capture the image
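A minimal loading sketch, assuming the two folder names listed above, the PNG subset, and that paired short/long captures share a file name; the actual pairing convention should be verified against metadata fields such as scene_id and exposure_ratio:

```python
from pathlib import Path
import cv2

root = Path("dataset")  # assumed local path to the downloaded dataset
short_dir, long_dir = root / "short_exposure", root / "long_exposure"

# Pair images by shared file name across the two exposure folders.
pairs = []
for short_path in sorted(short_dir.glob("*.png")):
    long_path = long_dir / short_path.name
    if long_path.exists():
        pairs.append((cv2.imread(str(short_path)), cv2.imread(str(long_path))))

print(f"Loaded {len(pairs)} low-light / normal-light pairs")
```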
https://www.archivemarketresearch.com/privacy-policy
Unlock the power of your data with advanced Data Enrichment Tools. Explore market size, CAGR, drivers, and trends for 2025-2033. Discover top solutions for B2B sales, marketing, and analytics.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Nanoparticles have broad applications in materials mechanics, medicine, energy, and other fields. The ordered arrangement of nanoparticles is very important for fully understanding their properties and functionalities. However, in materials science the acquisition of training images requires many professionals and the labor cost is extremely high, so training samples are usually very scarce. In this study, a segmentation method for nanoparticle topological structure based on synthetic data (SD) is proposed, which aims to address the small-data problem in the materials field. Our findings reveal that combining SD generated by rendering software with merely 15% authentic data (AD) yields better performance when training a deep learning model. The trained U-Net model achieves an MIoU of 0.8476, accuracy of 0.9970, Kappa of 0.8207, and Dice of 0.9103. Compared with data enhancement alone, our approach yields a 1% improvement in the MIoU metric. These results show that the proposed strategy can achieve better prediction performance without increasing the cost of data acquisition.
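For reference, the IoU and Dice scores quoted above can be computed from binary masks roughly as follows; this is a generic sketch, not the authors' evaluation code, and mean IoU (MIoU) averages the per-class IoU values:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Return (IoU, Dice) for two binary masks given as numpy arrays."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return iou, dice
```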
Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.
A low-light image enhancement dataset using wavelet-based diffusion models.
Facebook
The Earth Surface Mineral Dust Source Investigation (EMIT) instrument measures surface mineralogy, targeting the Earth’s arid dust source regions. EMIT is installed on the International Space Station. EMIT uses imaging spectroscopy to take measurements of sunlit regions of interest between 52° N latitude and 52° S latitude. An interactive map showing the regions being investigated, current and forecasted data coverage, and additional data resources can be found on the VSWIR Imaging Spectroscopy Interface for Open Science (VISIONS) EMIT Open Data Portal.
In addition to its primary objective described above, EMIT has demonstrated the capacity to characterize methane (CH4) and carbon dioxide (CO2) point-source emissions by measuring gas absorption features in the shortwave infrared bands. The EMIT Level 2B Carbon Dioxide Enhancement Data (EMITL2BCO2ENH) Version 2 data product is a total vertical column enhancement estimate of carbon dioxide in parts per million meter (ppm m) based on an adaptive matched filter approach. EMITL2BCO2ENH provides per-pixel carbon dioxide enhancement data used to identify carbon dioxide plume complexes, per-pixel carbon dioxide uncertainty due to sensor noise, and per-pixel carbon dioxide sensitivity that can be used to remove bias from the enhancement data.
The EMITL2BCO2ENH Version 2 data product includes carbon dioxide enhancement granules for all captured scenes, regardless of carbon dioxide plume complex identification. Each granule contains three Cloud Optimized GeoTIFF (COG) files at a spatial resolution of 60 meters (m): Carbon Dioxide Enhancement (EMIT_L2B_CO2ENH), Carbon Dioxide Uncertainty (EMIT_L2B_CO2UNCERT), and Carbon Dioxide Sensitivity (EMIT_L2B_CO2SENS). The EMITL2BCO2ENH COG files contain carbon dioxide enhancement data based primarily on EMITL1BRAD radiance values.
Each granule is approximately 75 kilometers (km) by 75 km, nominal at the equator, with some granules near the end of an orbit segment reaching 150 km in length.
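As an illustration, the three COG layers of a granule can be read with a standard GeoTIFF reader such as rasterio; the file names below are placeholders, and the exact procedure for applying the sensitivity layer to remove bias is described in the product documentation:

```python
import numpy as np
import rasterio

def read_band(path):
    # Read band 1 of a Cloud Optimized GeoTIFF into a numpy array.
    with rasterio.open(path) as src:
        return src.read(1)

# Placeholder file names; real granules follow the EMIT granule naming convention.
enh = read_band("EMIT_L2B_CO2ENH_example.tif")     # CO2 enhancement, ppm m
unc = read_band("EMIT_L2B_CO2UNCERT_example.tif")  # per-pixel uncertainty (sensor noise)
sens = read_band("EMIT_L2B_CO2SENS_example.tif")   # per-pixel sensitivity

# One simple screening step: keep only pixels whose enhancement exceeds its uncertainty.
candidates = np.where(enh > unc, enh, np.nan)
```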
Known Issues
Improvements/Changes from Previous Versions
https://creativecommons.org/publicdomain/zero/1.0/
The data accompanying the lecture on Image Enhancement by Anders Kaestner, part of the Quantitative Big Imaging course.
The slides for the lecture are here
This dataset was created by Dicka taksa
The Earth Surface Mineral Dust Source Investigation (EMIT) instrument measures surface mineralogy, targeting the Earth’s arid dust source regions. EMIT is installed on the International Space Station (ISS) and uses imaging spectroscopy to take measurements of the sunlit regions of interest between 52° N latitude and 52° S latitude. An interactive map showing the regions being investigated, current and forecasted data coverage, and additional data resources can be found on the VSWIR Imaging Spectroscopy Interface for Open Science (VISIONS) EMIT Open Data Portal.
In addition to its primary objective described above, EMIT has demonstrated the capacity to characterize methane (CH4) and carbon dioxide (CO2) point-source emissions by measuring gas absorption features in the short-wave infrared bands. The EMIT Level 2B Greenhouse Gas (GHG) series of products can be used to identify and quantify point-source emissions. The EMIT Level 2B Methane Enhancement Data (EMITL2BCH4ENH) Version 1 data product is a total vertical column enhancement estimate of methane in parts per million meter (ppm m) based on an adaptive matched filter approach. EMITL2BCH4ENH provides per-pixel methane enhancement data used to identify methane plume complexes. The initial release of the EMITL2BCH4ENH data product will only include granules where methane plume complexes have been identified. Each granule contains one Cloud Optimized GeoTIFF (COG) file at a spatial resolution of 60 meters (m): Methane Enhancement (EMIT_L2B_CH4ENH). The EMITL2BCH4ENH file contains methane enhancement data based primarily on EMITL1BRAD radiance values.
Each granule is approximately 75 kilometers (km) by 75 km, nominal at the equator, with some granules near the end of an orbit segment reaching 150 km in length.
Known Issues
* Data acquisition gap: From September 13, 2022, through January 6, 2023, a power issue outside of EMIT caused a pause in operations. Due to this shutdown, no data were acquired during that timeframe.
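A sketch of programmatic access, assuming the granules are discoverable through NASA Earthdata via the earthaccess Python client and that the collection short name matches the product name above; the time window is an arbitrary example:

```python
import earthaccess

# Authenticate with NASA Earthdata credentials (interactive prompt or ~/.netrc).
earthaccess.login()

# Search the EMIT L2B methane enhancement collection by short name for a sample window.
results = earthaccess.search_data(
    short_name="EMITL2BCH4ENH",
    temporal=("2023-01-07", "2023-02-01"),
    count=10,
)

# Download the matching granules to a local directory.
files = earthaccess.download(results, "emit_ch4_granules")
```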
Loans from the Oregon Credit Enhancement Fund (CEF) under ORS 285B.200. This is a loan insurance program available to lenders to assist businesses in obtaining access to capital. For more information visit https://www.oregon.gov/biz/programs/CEF/Pages/default.aspx
Data has changed business practices in France. The use of data allows insurance professionals, among other things, to improve their relationships with customers. Aware of the potential of data for their growth, ** percent of respondents reported that their main strategy for enhancing the value of collected data was to use data mining technology.
In this paper, we propose a novel approach to reduce the noise in Synthetic Aperture Radar (SAR) images using particle filters. Interpretation of SAR images is a difficult problem, since they are contaminated with a multiplicative noise known as "speckle noise". In the literature, the general approach for removing speckle is to use local statistics computed in a square window. Here, we propose to use particle filters, a sequential Bayesian technique. The proposed method also uses local statistics to denoise the images. Since this is a Bayesian approach, the computed statistics of the window can be exploited as a priori information. Moreover, particle filters are sequential methods, which are more appropriate for handling the heterogeneous structure of the image. Computer simulations show that the proposed method provides better edge-preserving results with satisfactory speckle removal when compared to the results obtained by the Gamma Maximum a Posteriori (MAP) filter.
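For context on the window-based local-statistics baseline the abstract refers to, a classic Lee filter is sketched below; this is a standard despeckling baseline, not the particle-filter method proposed in the paper, and the noise variance value is illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, window=7, noise_var=0.05):
    """Classic Lee speckle filter using local mean/variance in a square window."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img * img, size=window)
    local_var = local_sq_mean - local_mean ** 2
    # Weight tends to 1 in heterogeneous regions (edges preserved)
    # and to 0 in homogeneous regions (speckle smoothed).
    weight = local_var / (local_var + noise_var)
    return local_mean + weight * (img - local_mean)
```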
Success.ai empowers businesses with dynamic, enterprise-grade B2B company datasets, enabling deep insights into over 28 million verified company profiles, including specialized segments like e-commerce and private companies. Ideal for those targeting diverse company types, our data supports strategic initiatives from sales to competitor analysis.
Key Use Cases Enhanced by Success.ai:
Why Choose Success.ai?
Get Started with Success.ai Today: Partner with us to harness the power of detailed and expansive company data. Whether for enriching your sales processes, conducting in-depth competitor analysis, or enhancing your overall data strategy, Success.ai provides the tools and insights necessary to propel your business to new heights.
Contact us to explore how our tailored data solutions can transform your business operations and strategic initiatives.
Remember, with Success.ai, no one beats us on price. Period.
Success.ai’s B2B Contact Data Enrichment API empowers businesses to optimize their sales and marketing initiatives by providing seamless access to verified, continuously updated B2B contact information. Leveraging a database of over 700 million global profiles, our API enriches your existing records with critical data points, including job titles, work emails, phone numbers, LinkedIn URLs, and more.
This real-time, AI-validated enrichment ensures that you are always engaging with the most relevant and high-potential prospects. Supported by our Best Price Guarantee, the Contact Enrichment API is indispensable for organizations aiming to streamline workflows, improve targeting, and maximize conversion rates.
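The general shape of an enrichment call is sketched below; the endpoint URL, authentication scheme, request fields, and response schema are placeholders rather than the actual Success.ai API, which should be taken from the official documentation:

```python
import requests

# Hypothetical endpoint and payload for illustration only.
API_URL = "https://api.example.com/v1/contacts/enrich"  # placeholder URL
payload = {"email": "jane.doe@example.com"}              # record to enrich

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer <API_KEY>"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()
enriched = resp.json()  # e.g. job title, work email, phone number, LinkedIn URL
```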
Why Choose Success.ai’s Contact Enrichment API?
Real-Time Enrichment for Precision Outreach
Comprehensive Global Coverage
Anytime Access with Powerful Filtering
Ethical and Compliant
Data Highlights:
Key Features of the Contact Enrichment API:
Seamless Integration with Your Systems
Granular Filtering and Query Capabilities
Real-Time Updates and Continuous Enrichment
AI-Validated Accuracy
Strategic Use Cases:
Sales and Lead Generation
Marketing Campaigns and ABM Strategies
Partnership Development and Vendor Evaluation
Recruitment and Talent Acquisition
Why Choose Success.ai?
Best Price Guarantee
Seamless Integration
Data Accuracy with AI Validation
Custo...
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Digital Attachment with Data for the Paper: Enhancement of 3D Camera Synthetic Training Data with Noise Models.