https://www.datainsightsmarket.com/privacy-policy
The digital image processing market is projected to reach a value of XXX million by 2033, expanding at a CAGR of XX% during the forecast period (2025-2033). The market is driven by the increasing demand for digital images in various applications, including healthcare, manufacturing, and entertainment. The availability of advanced image processing algorithms and the proliferation of high-resolution imaging devices are also contributing to the market's growth. The digital image processing market is segmented based on application (medical imaging, remote sensing, industrial inspection, and others) and type (2D, 3D, and 4D). The medical imaging segment holds a significant market share due to the increasing use of medical imaging techniques for disease diagnosis and treatment planning. The industrial inspection segment is also expected to witness significant growth, driven by the increasing demand for automated inspection systems in manufacturing plants. The major companies operating in the digital image processing market include IBM, AWS, Google, Microsoft, Trax, Canon, Casio, Epson, Olympus, and Nikon. North America is expected to dominate the market, followed by Asia Pacific and Europe.
This child data release includes hyperspectral and RGB images acquired from an Unmanned Aircraft System (UAS) during an experiment performed at the USGS Columbia Environmental Research Center, near Columbia, Missouri, on April 2, 2019. The purpose of the experiment was to assess the feasibility of inferring concentrations of a visible dye (Rhodamine WT) tracer from various types of remotely sensed data in water with varying levels of turbidity. Whereas previous research on remote sensing of tracer dye concentrations has focused on clear-flowing streams, the Missouri River is much more turbid, and the reflectance signal associated with the sediment-laden water could obscure that related to the presence and amount of dye. This experiment thus provided an initial test of the potential to map dye concentrations from remotely sensed data in more turbid rivers like the Missouri, where tracer studies involving the release of a visible dye can provide insight regarding the dispersal of endangered sturgeon larvae. The experiment involved manipulating the turbidity and Rhodamine WT dye concentration in two water tanks, acquiring hyperspectral and RGB images, and attempting to infer dye concentrations from the images for varying levels of turbidity. Hyperspectral imagery (HSI) was collected with a Headwall Nano-Hyperspec (Headwall Photonics, Bolton, MA), a pushbroom sensor that measures reflectance from 400–1000 nm in the VNIR (visible and near-infrared). Sensor calibration was performed by collecting a dark reference with the lens cap on, and a white reference with a 25.4 cm x 25.4 cm Labsphere Spectralon® (Labsphere, Inc., North Sutton, NH) calibrated diffuse reference target that reflects 99% of light in accordance with National Institute of Standards and Technology standards. The sensor was mounted to a DJI Ronin-MX gimbal (DJI, Shenzhen, China) affixed to a DJI M600 Pro unmanned aerial vehicle (UAV) and flown 30 m above the tanks, yielding a ground sampling distance of 2 cm.
The gimbal provides stability for the payload, which aids in post-processing of the HSI. The UAV repeated a flight plan over the two tanks to create image data cubes. The resulting HSI was radiometrically corrected in the Headwall HyperspecIII SpectralView software package to convert raw digital numbers to radiance. The same software was used to orthorectify the images, applying latitude and longitude GPS information to the cubes using data from an Xsens MTi-G-710 inertial measurement unit (IMU; Xsens, Enschede, The Netherlands). The white reference was included in each scene and used to make atmospheric corrections in ENVI (Harris Geospatial Solutions, Inc., Broomfield, CO) to convert radiance to relative reflectance. Time stamps from the HSI were then compared to time stamps from the field spectra in a related data release to select only the data cubes that were nearest in time to when the field spectra were recorded. The RGB images were acquired using the built-in 12 megapixel camera on a DJI Mavic Pro UAV with an on-board GPS that collected position data during the flights. Images were acquired on a two-second interval while the UAV hovered in the same position. The original RGB images were used directly without further pre-processing. Time stamps for the images were used to link them to turbidity and concentration measurements made in situ in each tank during the experiment. The image data are compiled in a set of zip files, two for the hyperspectral images and one for the RGB images, and a text file listing the time stamps and file names for both types of images. The hyperspectral and RGB images selected for analysis were based on the time interval during which field spectra were recorded. The RGB image closest in time to each of the hyperspectral images used was selected from the list of available RGB images.
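The nearest-in-time pairing of RGB frames with hyperspectral cubes described above can be sketched as follows; the function name and the example time stamps are illustrative assumptions, not the actual processing scripts used for this release:

```python
from datetime import datetime

def nearest_in_time(target, candidates):
    """Return the candidate timestamp closest in time to the target.

    This mirrors the selection step: for each hyperspectral cube's time
    stamp, keep only the RGB image whose time stamp is nearest to it.
    """
    return min(candidates, key=lambda t: abs((t - target).total_seconds()))

# Example: pick the RGB frame (2-second interval) closest to a cube's time stamp
hsi_time = datetime(2019, 4, 2, 10, 15, 31)
rgb_times = [datetime(2019, 4, 2, 10, 15, s) for s in range(0, 60, 2)]
closest = nearest_in_time(hsi_time, rgb_times)
```

The same helper applies in the other direction, e.g. matching cubes to the field-spectra time stamps from the related data release.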
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This course explores the theory, technology, and applications of remote sensing. It is designed for individuals with an interest in GIS and geospatial science who have no prior experience working with remotely sensed data. Lab exercises make use of the web and the ArcGIS Pro software. You will work with and explore a wide variety of data types including aerial imagery, satellite imagery, multispectral imagery, digital terrain data, light detection and ranging (LiDAR), thermal data, and synthetic aperture radar (SAR). Remote sensing is a rapidly changing field influenced by big data, machine learning, deep learning, and cloud computing. In this course you will gain an overview of the subject of remote sensing, with a special emphasis on principles, limitations, and possibilities. In addition, this course emphasizes information literacy, and will develop your skills in finding, evaluating, and using scholarly information. You will be asked to work through a series of modules that present information relating to a specific topic. You will also complete a series of lab exercises to reinforce the material. Lastly, you will complete paper reviews and a term project. We have also provided additional bonus material and links associated with surface hydrologic analysis with TauDEM, geographic object-based image analysis (GEOBIA), Google Earth Engine (GEE), and the geemap Python library for Google Earth Engine. Please see the sequencing document for our suggested order in which to work through the material. We have also provided PDF versions of the lectures with the notes included.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This project enhanced the potential of using state-of-the-art aerial digital framing cameras with time delayed integration (TDI) to acquire useful low light level imagery. Computational photography is an emerging field of study pertaining to capturing, processing, and manipulating digital imagery with the purpose of enhancing and improving the imagery beyond what is typically accomplished using traditional image processing techniques.
While computational photography techniques have been extensively applied to computer vision and computer graphics problems and are becoming more common in consumer cameras and mobile devices, they have only limitedly been applied within the remote sensing community. With increased computer processing power and awareness of the utility of computational photography, these techniques are now beginning to be applied to the remote sensing image processing chain. This project made use of two computational photography techniques, high dynamic range (HDR) imagery formulation and bilateral filters to enable novel imaging applications. By carefully combining multiple data sets, the effective dynamic range within the image can be increased without over or underexposing portions of the scene. Using this technique, HDR image products were produced from imagery acquired under extreme low light level conditions.
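As a rough illustration of the HDR formulation idea, a minimal exposure-weighted merge might look like the following sketch; the weighting scheme, thresholds, and function name are simplified assumptions, not the project's actual algorithm:

```python
import numpy as np

def merge_hdr(frames, exposures, dark=0.05, sat=0.95):
    """Combine frames of one scene, taken at different exposure times,
    into a single radiance estimate.

    Each pixel is averaged over the frames in which it is well exposed
    (neither too dark nor saturated), after normalizing each frame by
    its exposure time so all frames share a common radiometric scale.
    """
    frames = np.asarray(frames, dtype=float)       # shape: (n_frames, H, W)
    exposures = np.asarray(exposures, dtype=float)  # shape: (n_frames,)
    # Down-weight under- and overexposed pixels to a negligible value
    weights = np.where((frames > dark) & (frames < sat), 1.0, 1e-6)
    radiance = frames / exposures[:, None, None]    # per-frame radiance estimate
    return np.sum(weights * radiance, axis=0) / np.sum(weights, axis=0)
```

With a short and a long exposure of the same scene, saturated pixels in the long frame are recovered from the short frame, and dark pixels in the short frame from the long one, which is the behavior the text describes.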
These two computational photography techniques, high dynamic range (HDR) imagery formulation and bilateral filtering, were applied to enable novel imaging applications in support of developing a low light level imaging capability. HDR imaging is a technique that generates an image with a greater dynamic range than ordinarily achievable given an imaging system’s hardware architecture. HDR images are generated by acquiring multiple images of the same scene at different exposure settings. Each individual image contains a collection of properly exposed pixels along with pixels that are either dark (underexposed) or saturated (overexposed). HDR image products are generated by combining multiple frames of data at different exposure times such that the darkest areas within a frame are imaged with the longest exposure time and the brightest areas within a frame are imaged with the shortest exposure time. This technique can be very powerful when processing imagery acquired under low light level conditions. Standard imagery acquired under low light often contains a significant number of pixels that are extremely dark, such that information content is lost in the shadows. Bilateral filters reduce the noise in relatively uniform areas within an image while minimizing blurring of edges and other spatial features. Edge-preserving noise reduction filters are important for improving the imagery of poorly lit scenes and can be used to improve the quality of imagery acquired under low light level conditions. This type of filter preserves edges by only allowing pixels with similar radiometric values to be included in the spatial filter. The entire image is processed by looping through each pixel and assigning weights to the adjacent pixels.
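The per-pixel weighting loop described above can be sketched as follows. This is a generic Gaussian-weight bilateral filter; the parameter names and default values are illustrative, not the project's Matlab implementation:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Edge-preserving smoothing of a 2-D float image.

    Each output pixel is a weighted mean of its neighborhood, where the
    weight combines spatial closeness (fixed Gaussian kernel) with
    radiometric similarity (range kernel), so pixels across an edge
    contribute almost nothing and the edge is preserved.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial weights
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: only radiometrically similar pixels contribute
            rng_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

The double loop makes the cost clear: every pixel requires a full neighborhood pass, which is why the text notes that approximations were developed for practical use.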
However, implementation of the bilateral filter can be computationally intensive, so alternative algorithms that rely on approximations were developed. The bilateral filter described above was implemented in Matlab® and a simple simulated edge target image was constructed to functionally test the algorithms. Images were tested at two noise levels (2% and 4%), and the benefit of the bilateral filter was more pronounced as light level and image quality decreased. By using these two computational photography techniques, representative HDR image products were successfully produced from imagery acquired under extreme low light conditions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was developed at the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology as part of the ongoing activities at the Center for Energy and Geo-Processing (CeGP) at Georgia Tech and KFUPM. LANDMASS stands for “LArge North-Sea Dataset of Migrated Aggregated Seismic Structures”. This dataset was extracted from the North Sea F3 block under the Creative Commons license (CC BY-SA 3.0). For the purposes of this work, four classes of seismic structures (horizons, chaotics, faults, and salt domes) were automatically extracted from the F3 block using a curvelet-based distance measure for seismic images, also proposed by the CeGP. The extracted images were then manually verified, and all outliers or images that contained more than one structure were removed.
Mineral groups identified through automated analysis of remote sensing data acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were used to generate a map showing the type and spatial distribution of hydrothermal alteration, other exposed mineral groups, and green vegetation across the southwestern conterminous United States. Boolean algebra was used to combine mineral groups identified through analysis of visible, near-infrared, and shortwave-infrared ASTER data into attributed alteration types and mineral classes based on common mineralogical definitions of such types and the minerals present within the mineral groups. Alteration types modeled in this way can be stratified relative to acid producing and neutralizing potential to aid in geoenvironmental watershed studies. This mapping was performed in support of multidisciplinary studies involving the predictive modeling of mineral deposit occurrence and geochemical environments at watershed to regional scales. These studies seek to determine the relative effects of mining and non-anthropogenic hydrothermal alteration on watershed surface water geochemistry and faunal populations. The presence or absence of hydrothermally-altered rocks and (or) specific mineral groups can be used to model the favorability of occurrence of certain types of mineral deposits, and aid in the delineation of permissive tracts for these deposits. These data were used as a data source for the U.S. Geological Survey (USGS) Sagebrush Mineral-Resource Assessment (SaMiRA). This map, in ERDAS Imagine (.img) format, has been attributed by pixel value with material identification data that can be queried in most image processing and GIS software packages. Three files are included with this product: the file with the .img extension contains thematic image attributes and geographic projection data, the file with the .ige extension contains the raster data, and the file with the .rrd extension contains pyramid data for fast display.
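The Boolean combination of per-pixel mineral-group masks can be illustrated with a toy example. The group names and the combination rule below are hypothetical, chosen only to show the mechanics, and are not the USGS attribution logic:

```python
import numpy as np

# Hypothetical per-pixel detection masks for two mineral groups
# (True where automated analysis flagged the group in that pixel)
sericite = np.array([[True, False],
                     [True, True]])
pyrite = np.array([[True, True],
                   [False, True]])

# A Boolean-algebra rule attributing an alteration type wherever both
# groups co-occur; real alteration-type definitions are mineralogical
# and may combine many more groups with AND/OR/NOT logic.
phyllic = sericite & pyrite
```

Because each rule is just elementwise Boolean algebra on rasters, the resulting attributed layers can be stratified or queried like any other thematic raster, as the text describes.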
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Metadata record for data from ASAC Project 291. See the link below for public details on this project.
From the abstracts of the referenced papers:
Ground surveys of the ice sheet in Wilkes Land, Antarctica, have been made on oversnow traverses operating out of Casey. Data collected include surface elevation, accumulation rate, snow temperature, and physical surveys; the data are mostly restricted to line profiles. In some regions, aerial surveys of surface topography have been made over a grid network. Satellite imagery and remote sensing are two means of extrapolating the results from measurements along lines to an areal presentation. They are also the only source of data over large areas of the continent. Landsat images in the visible and near infra-red wavelengths clearly depict many of the large- and small-scale features of the surface. The intensity of the reflected radiation varies with the aspect and magnitude of the surface slope to reveal the surface topography. The multi-channel nature of the Landsat data is exploited to distinguish between different surface types through their different spectral signatures, e.g. bare ice, glaze, snow, etc. Additional information on surface type can be gained at a coarser scale from other satellite-borne sensors such as the ESMR, SMMR, etc. Textural enhancement of the Landsat images reveals the surface micro-relief. Features in the enhanced images are compared to ground-truth data from the traverse surveys to produce a classification of the surface types across the images and to determine the magnitude of the surface topography and micro-relief observed. The images can then be used to monitor changes over time.
Landsat imagery of the Antarctic ice sheet and glaciers exhibits features that move with the ice and others that are fixed in space. Two images covering the same area but acquired at different times are compared to obtain the displacement of features. Where the time lapse is large, the displacement of obvious features can be scaled from photographic prints. When the two images are co-registered, finer features and displacements can be resolved to give greater detail.
Remote sensing techniques can be used to investigate the dynamics and surface characteristics of the Antarctic ice sheet and its outlet glaciers. This paper describes a methodology developed to map glacial movement velocities from LANDSAT MSS data, together with an assessment of the accuracy achieved. The velocities are derived by using digital image processing to register two temporally separated LANDSAT images of the Denman glacier and Shackleton Ice Shelf region. A derived image map is compared with existing maps of the region to substantiate the measured velocities. The velocity estimates from this study were found to correspond closely with ground-based measurements in the study area.
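The feature-displacement measurement between two co-registered images can be illustrated with a simple template-matching sketch. The normalized-correlation search below is a generic approach under assumed parameters (patch size, search radius), not the exact digital image processing procedure of the paper:

```python
import numpy as np

def patch_displacement(img1, img2, center, half=8, search=4):
    """Estimate the (dy, dx) shift of a feature between two images.

    A template around `center` in img1 is compared against shifted
    windows in img2; the shift maximizing normalized correlation is
    taken as the feature displacement (in pixels). Multiplying by the
    pixel size and dividing by the time between acquisitions would
    convert this to a velocity.
    """
    cy, cx = center
    tmpl = img1[cy - half:cy + half + 1, cx - half:cx + half + 1]
    tmpl = tmpl - tmpl.mean()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = img2[cy + dy - half:cy + dy + half + 1,
                       cx + dx - half:cx + dx + half + 1]
            win = win - win.mean()
            score = np.sum(tmpl * win) / (
                np.linalg.norm(tmpl) * np.linalg.norm(win) + 1e-12)
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```

Fixed features (rock outcrops) should return near-zero displacement and serve as a registration check, while features moving with the ice yield the velocity signal.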
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mineral groups identified through automated analysis of remote sensing data acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were used to generate a map showing the type and spatial distribution of hydrothermal alteration, other exposed mineral groups, and green vegetation across the northwestern conterminous United States. Boolean algebra was used to combine mineral groups identified through analysis of visible, near-infrared, and shortwave-infrared ASTER data into attributed alteration types and mineral classes based on common mineralogical definitions of such types and the minerals present within the mineral groups. Alteration types modeled in this way can be stratified relative to acid producing and neutralizing potential to aid in geoenvironmental watershed studies. This mapping was performed in support of multidisciplinary studies involving the predictive modeling of mineral deposit occurrence and geochemical environments at watershed to regional scales. These studies seek to determine the relative effects of mining and non-anthropogenic hydrothermal alteration on watershed surface water geochemistry and faunal populations. The presence or absence of hydrothermally-altered rocks and (or) specific mineral groups can be used to model the favorability of occurrence of certain types of mineral deposits, and aid in the delineation of permissive tracts for these deposits. These data were used as a data source for the U.S. Geological Survey (USGS) Sagebrush Mineral-Resource Assessment (SaMiRA). This map, in ERDAS Imagine (.img) format, has been attributed by pixel value with material identification data that can be queried in most image processing and GIS software packages. Three files are included with this product: the file with the .img extension contains thematic image attributes and geographic projection data, the file with the .ige extension contains the raster data, and the file with the .rrd extension contains pyramid data for fast display.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
For more details, please refer to our paper and visit our GitHub repository.
TL;DR:
SynRS3D is a comprehensive synthetic remote sensing dataset designed to improve global 3D semantic understanding from monocular high-resolution imagery. It includes data for three key tasks:
The dataset consists of 17 folders and includes a total of 69,667 images at a resolution of 512x512. After downloading and extracting the files, ensure the directory structure follows this format:
${DATASET_ROOT} # Example: /home/username/project/SynRS3D/data/grid_g05_mid_v1
├── opt # RGB images (.tif), also used as post-event images for building change detection
├── pre_opt # RGB images (.tif), used as pre-event images for building change detection
├── gt_nDSM # Normalized Digital Surface Model (nDSM) images (.tif)
├── gt_ss_mask # Land cover mapping labels (.tif)
├── gt_cd_mask # Building change detection masks (.tif, 0 = no change, 255 = change area)
└── train.txt # List of training data filenames
The land cover mapping labels (`gt_ss_mask`) are mapped to the following categories:
The dataset is organized into grid-like and irregular terrain. It includes a range of ground sampling distances (GSDs) and variations in building heights. The folder naming convention indicates these characteristics:
- `grid` = grid-like terrain
- `terrain` = irregular terrain
- `g005`, `g05`, `g1` = GSD ranges (0.05m–0.3m, 0.3m–0.6m, and 0.6m–1m, respectively)
- `low`, `mid`, `high` = building height variations
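Based on the naming convention above, a small helper can recover a folder's attributes from its name; the function and the returned field names are illustrative, not part of the SynRS3D toolkit:

```python
import re

# GSD code -> (min, max) ground sampling distance in meters, per the
# convention documented above
GSD_RANGES = {"g005": (0.05, 0.3), "g05": (0.3, 0.6), "g1": (0.6, 1.0)}

def parse_folder(name):
    """Split e.g. 'grid_g05_mid_v1' into terrain type, GSD range,
    building-height variation, and version number."""
    m = re.fullmatch(r"(grid|terrain)_(g005|g05|g1)_(low|mid|high)_v(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized folder name: {name}")
    terrain, gsd, height, ver = m.groups()
    return {"terrain": terrain, "gsd_range_m": GSD_RANGES[gsd],
            "height_variation": height, "version": int(ver)}
```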
The dataset includes the following image counts:
- 1,430 images – `terrain_g05_mid_v1`
- 10,000 images – `grid_g05_mid_v2`
- 2,354 images – `terrain_g05_low_v1`
- 3,707 images – `terrain_g05_high_v1`
- 880 images – `terrain_g005_mid_v1`
- 2,127 images – `terrain_g005_low_v1`
- 11,325 images – `grid_g005_mid_v2`
- 1,212 images – `terrain_g005_high_v1`
- 348 images – `terrain_g1_mid_v1`
- 4,285 images – `terrain_g1_low_v1`
- 904 images – `terrain_g1_high_v1`
- 3,000 images – `grid_g005_mid_v1`
- 2,997 images – `grid_g005_low_v1`
- 4,000 images – `grid_g005_high_v1`
- 7,000 images – `grid_g05_mid_v1`
- 7,098 images – `grid_g05_low_v1`
- 7,000 images – `grid_g05_high_v1`
If you find SynRS3D useful in your research, please consider citing:
For any questions or feedback, feel free to reach out via email: song@ms.k.u-tokyo.ac.jp.
Enjoy using SynRS3D!
The National Land Cover Database products are created through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (EPA), the U.S. Department of Agriculture - Forest Service (USDA-FS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM) and the USDA Natural Resources Conservation Service (NRCS). The success of NLCD over nearly two decades is credited to the continuing collaborative spirit of the agencies that make up the MRLC. NLCD 2011 is the most up-to-date iteration of the National Land Cover Database, the definitive Landsat-based, 30-meter resolution land cover database for the Nation. The data in NLCD 2011 are completely integrated with NLCD 2001 (2011 Edition, amended 2014) and NLCD 2006 (2011 Edition, amended 2014). For NLCD 2011, there are 5 primary data products: 1) NLCD 2011 Land Cover 2) NLCD 2006/2011 Land Cover Change Pixels labeled with the 2011 land cover class 3) NLCD 2011 Percent Developed Imperviousness 4) NLCD 2006/2011 Percent Developed Imperviousness Change Pixels 5) NLCD 2011 Tree Canopy Cover provided by an MRLC partner - the USDA Forest Service Remote Sensing Applications Center. In addition, ancillary metadata includes the NLCD 2011 Path/Row Index shapefile showing the footprint of Landsat scenes and change analysis pairs used to derive 2006/2011 spectral change. All Landsat scene acquisition dates are included in the shapefile's attribute table. As part of the NLCD 2011 project, NLCD 2001 and 2006 land cover and impervious data products were revised and reissued (2011 Edition, amended 2014) to provide full compatibility with the new NLCD 2011 products. 
The 2014 amended version corrects for the over-elimination of small areas of the four developed classes. NLCD Tree Canopy Cover was created using MRLC mapping zones from NLCD 2001 (see Tree Canopy Cover metadata for additional detail). All other NLCD 2011 products were created on a path/row basis and mosaicked to create a seamless national product. Questions about the NLCD 2011 land cover product can be directed to the NLCD 2011 land cover mapping team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This report provides details for the Copper River Delta Area of the Alaska Region Existing Vegetation Mapping Effort completed in 2013. An existing vegetation map was prepared in a collaborative effort between the Chugach National Forest, Alaska Regional Office (Region 10), Ducks Unlimited, and the Remote Sensing Applications Center (RSAC). This map was designed to be consistent with the standards established in the Existing Vegetation Classification and Technical Guide (Nelson et al., 2015) and to provide baseline information to support project planning and management of the Copper River Delta. The final map comprises 15 land cover types: 11 vegetation classes and 4 non-vegetated classes. Geospatial data, including remotely sensed imagery, a digital surface model, and ancillary data were assembled. A semi-automated image segmentation process was used to develop the modeling units (mapping polygons), which represented relatively homogeneous areas of land cover to be classified. Land cover class determinations were made for field-visited reference sites and subsequently used to develop predictive random forest classification models. Photo interpretation was then used to evaluate individual map models and manually edit interim maps. This process utilized various Forest Service Enterprise software packages and the most contemporary mapping methods. Once the final map was produced, an accuracy assessment was conducted to reveal individual class confusion and provide additional insight into the reliability of the final map for resource applications. Overall accuracy of the final vegetation map was 82 percent.
Mass-wasting events that displace water, whether they initiate from underwater sources (submarine landslides) or subaerial sources (subaerial-to-submarine landslides), have the potential to cause tsunami waves that can pose a significant threat to human life and infrastructure in coastal areas (for example towns, cruise ships, bridges, oil platforms, and communication lines). Sheltered inlets and narrow bays can be locations of especially high risk as they often have higher human populations, and the effects of water displacement from moving sediment can be amplified as compared to the effects from similarly sized mass movements in open water. In landscapes undergoing deglaciation, such as the fjords and mountain slopes adjacent to tidewater glaciers found in Southeast Alaska, glacial retreat and permafrost decay can destabilize rock slopes and increase landslide potential. Establishing and maintaining inventories of subaerial and submarine landslides in such environments is critical for identifying the magnitude and frequency of past events, as well as for assessing areas that may be susceptible to failures in the future. To maintain landslide inventories, multi-temporal surveys are needed. High-resolution digital elevation models (DEM) and aerial imagery can be used to establish and maintain subaerial landslide inventories, but repeat bathymetric surveys to detect submarine landslides are generally less available than their terrestrial counterparts. However, existing bathymetry can be used to establish a spatial inventory of landslides on the seafloor to provide a baseline for understanding the magnitude of past events and for locating areas of high submarine landslide susceptibility. These data can then be used to address how future failures and the tsunamis that they could trigger could impact surrounding areas. Here, we present an inventory of mapped landslide features in Glacier Bay, Alaska that includes landslide source areas, deposits, and scarps. 
This data release contains geographic information system (GIS) polygons and polylines for these mapped features; the underlying digital elevation model (DEM) raster compiled from available bathymetry from the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Geological Survey (USGS); a slope map created from the compiled DEM; and a derivative topographic openness map used to help identify the landslide features. Bathymetric DEMs used in the compilation cover 1012.5 sq. km, which represents approximately 80% of the total area of Glacier Bay. The DEMs were collected in 2001 and 2009 for the southern and northern parts of the bay, respectively. To minimize resolution bias and maximize mapping consistency while maintaining visual fidelity, we re-sampled all the original bathymetry (resolution ranging from 1 to 16 m) to 5 m, which represents the minimum resolution for the majority of mapped areas; the lower resolution areas generally covered deeper and flatter portions of the bay where fewer landslides were present. For mapping, we used a topographic openness map (Yokoyama and others, 2002) in combination with a traditional slope map (see Red Relief Image Map in Chiba and others, 2008), which allows for good discernment of subtle concavities and convexities in the bathymetry and is well-suited for identifying landslide scars and deposits. We classified mapped landslides based on their source area type and used two primary classification categories of “slide” and “debris flow”. We used a third category, “mixed”, to classify landslides that showed evidence of both types of source area contributing to the deposit. For each landslide classified as slide or mixed, we mapped the source area and deposit as separate polygons. For landslides classified as debris flow, we mapped only deposits.
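A slope map of the kind derived here from the compiled DEM can be sketched with a simple gradient computation. This central-difference version is an illustrative assumption; GIS packages commonly use Horn's method over a 3x3 window instead:

```python
import numpy as np

def slope_degrees(dem, cell_size=5.0):
    """Slope map (degrees) from a DEM raster.

    Gradients in the row and column directions are estimated with
    central differences (np.gradient, scaled by the cell size, here
    5 m to match the resampled bathymetry), and the slope is the
    arctangent of the gradient magnitude.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

Such a slope raster, combined with the topographic openness map, highlights the subtle concavities and convexities used to delineate landslide scars and deposits.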
Since debris flow source areas are subaerial drainage basins, delineating them should be part of larger future subaerial landslide mapping efforts in Glacier Bay National Park and Preserve. Similarly, for mixed landslides, we delineated source areas as the slide contribution area and not the larger debris-flow drainage basin component. For any source areas (for mixed and slide polygons) or deposits that included a subaerial portion, we used 2012 5-m IFSAR data, and Landsat and DigitalGlobe imagery to map subaerial parts of the polygons. IFSAR and Landsat data are available from Earth Explorer (https://earthexplorer.usgs.gov/) and DigitalGlobe imagery is available from DigitalGlobe (https://www.digitalglobe.com/). These data and images are not included in this data release. Thirty-five of the forty-four slide and mixed features initiated as subaerial landslides. However, in all cases, we only mapped landslides if we could identify a submarine deposit. For example, we did not map the subaerial Tidal Inlet landslide (Wieczorek and others, 2007) because we could not identify a submarine deposit associated with it. Additionally, we did not map subaerial and submarine deposits that appeared to be deposited by water-dominated flows (e.g., alluvial fans and fan deltas), or large submarine fans that likely resulted from turbidite flows, such as the one at the junction of Queen Inlet and Glacier Bay. Because we could not observe mapped submarine landslides in the field, we assigned a level of moderate (77 landslides) or high (31 landslides) confidence based on our certainty that the mapped features represented actual slope failures. We omitted low confidence landslides from the map. In total, we mapped 108 landslides, with 22, 64, and 22 classified as slide, debris flow, and mixed, respectively. The total area (source and deposit) for slide and mixed type landslides ranged from 0.026 to 2.35 sq. km. Debris-flow deposits ranged from 0.012 to 0.61 sq. km. 
Finally, we mapped a total of 7,097 individual landslide scarps where we could not identify any clear associated deposits, and where the distance between lateral flanks was approximately 50 m or more. Though we did our best to map only arcuate-shaped scarps typically formed by landslides (that is, single-mass failures), as opposed to geomorphic features formed by gradual glacial or submarine-current-related erosion (for example, submarine canyon walls), we acknowledge that some mapped scarps may have been formed by processes other than landsliding. Thus, for purposes of landslide susceptibility mapping, these scarp data are intended to be used in conjunction with other data, such as slope angle, geologic substrate, or geomorphic units. Ultimately, the full dataset is meant to serve as a qualitative component to inform future submarine and subaerial landslide susceptibility assessments in Glacier Bay National Park and Preserve. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. References used: Chiba, T., Kaneta, S., and Suzuki, Y., 2008, Red relief image map: new visualization method for three dimensional data: The international archives of the photogrammetry, remote sensing and spatial information sciences, v. 37, no. B2, p. 1071–1076. Wieczorek, G.F., Geist, E.L., Motyka, R.J., Jakob, M., 2007, Hazard assessment of the tidal inlet landslide and potential subsequent tsunami, Glacier Bay National Park, Alaska: Landslides, v. 4 p. 205–215. Yokoyama, R., Shirasawa, M., and Pike, R.J., 2002, Visualizing topography by openness: a new application of image processing to digital elevation models: Photogrammetric engineering and remote sensing, v. 68, no. 3, p. 257–266.
This child data release provides the information needed to download from the USGS EarthExplorer portal digital orthophotos acquired during a tracer experiment performed on the Missouri River near Columbia, Missouri, on May 5, 2021. One of the primary goals of this tracer experiment was to assess the feasibility of inferring concentrations of a visible dye (Rhodamine WT) from various types of remotely sensed data in a large, highly turbid natural river channel. Previous research on remote sensing of tracer dye concentrations has focused on clear-flowing streams, but the Missouri River is much more turbid. As a result, the effect of the dye on the reflectance of the water could be obscured by the effects of suspended sediment on reflectance. This experiment thus provided an initial test of the potential to map dye concentrations from remotely sensed data in more turbid rivers like the Missouri. The experiment involved introducing a pulse of Rhodamine WT dye into the channel at an upstream transect and then observing the dispersion of the dye along the river using various in situ and remote sensing instruments. A flight contractor, Surdex Corporation, was enlisted to acquire digital orthophotography of the Missouri River area near Columbia, MO, spanning the approximately 7-mile reach of the channel from river miles 176-183, during the experiment. Eight flight lines were flown, starting around 9:00 a.m. and spaced 10 to 20 minutes apart, ending at 11:25 a.m. Central Standard Time. The images were captured with a Leica ADS100 Digital Mapping Camera. All survey ground control was also acquired and processed by Surdex; the imagery was controlled using airborne GPS/IMU technology on board the aircraft at the time of acquisition and processed against a stationary GPS base station. The four-band digital imagery was processed and triangulated, and then fully orthorectified and mosaicked into 10 cm digital orthophotography delivered as 4-band tiles.
The resulting data set consists of orthophotos with a 10 cm pixel size. Surdex Corporation used the raw imagery to produce high resolution 10 cm 4-band (red, green, blue, and near-infrared) orthophotos for each of eight passes over the project area of interest. Tiled deliverable products were created from a custom tiling scheme consisting of 19 tiles for each of the eight flight lines and consist of 4-band tiff files with corresponding *.tfw world files. The data set delivered by the flight contractor was transferred to the USGS Earth Resources Observation and Science (EROS) Center for archiving and distribution via the EarthExplorer web portal at https://earthexplorer.usgs.gov. EROS also produced metadata describing the orthophotos in the file EROSmetadata.csv. The orthophotos can be obtained by visiting the EarthExplorer web site at https://earthexplorer.usgs.gov/ and using the Entity ID field in the EROSmetadata.csv file. On the EarthExplorer home page, go to the second tab of the panel on the left, labeled Data Sets, select Aerial Imagery/High Resolution Orthoimagery, and click on Additional Criteria at the bottom. On the Additional Criteria tab, click the plus symbol next to Entity ID, enter the Entity ID value from the EROSmetadata.csv file for the tile of interest, and click on Results at the bottom. The tile should then appear in the results tab with several options represented by icons to show the footprint, overlay a browse image, or show the metadata and browse in a separate window. To download the data, click on the fifth icon from the left, which features a green download arrow pointing toward a disk drive, and click Download on the resulting pop-up to begin downloading a zip file. This zip archive contains a number of files in two subfolders.
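The Entity ID lookup step above can also be scripted. A minimal sketch, assuming EROSmetadata.csv is a standard CSV with an "Entity ID" column as described; the other column name below is purely illustrative:

```python
import csv
import io

def entity_ids(fileobj):
    """Return the Entity ID for every tile listed in the metadata CSV,
    ready to paste into EarthExplorer's Additional Criteria search."""
    return [row["Entity ID"] for row in csv.DictReader(fileobj)]

# Stand-in for open("EROSmetadata.csv"); values here are made up.
sample = io.StringIO(
    "Entity ID,Tile\n"
    "EXAMPLE0001,line1_10\n"
    "EXAMPLE0002,line1_11\n"
)
print(entity_ids(sample))  # ['EXAMPLE0001', 'EXAMPLE0002']
```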
For example, for line 1, tile 10: 1) 4023644_line1_10.zip\MO\2021\202106_missouri_river_dye_columbia_mo_10cm_utm15_cnir\index001 contains shapefiles of tile layouts and exposure times for each flight line and tile and the metadata and ortho accuracy reports from the flight contractor. The folder 2) 4023644_line1_10.zip\MO\2021\202106_missouri_river_dye_columbia_mo_10cm_utm15_cnir\vol001 has the actual image as a tif (like line1_10.tif) and corresponding *.tfw world file (like line1_10.tfw). These files can be opened and viewed in GIS or image processing software.
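The *.tfw world file accompanying each tif is a plain-text sidecar that georeferences the image. A minimal sketch of how its six lines map pixel indices to map coordinates follows; the coordinate values used here are illustrative, not taken from the actual tiles:

```python
def read_tfw(text):
    """Parse the six lines of a world file (*.tfw): x pixel size,
    y-axis rotation, x-axis rotation, negative y pixel size, and the
    map coordinates of the centre of the upper-left pixel."""
    a, d, b, e, c, f = (float(v) for v in text.split())
    return a, d, b, e, c, f

def pixel_to_map(col, row, tfw):
    """Affine transform from (col, row) pixel indices to map (x, y)."""
    a, d, b, e, c, f = tfw
    return a * col + b * row + c, d * col + e * row + f

# Illustrative values for a 10 cm orthophoto; real numbers come from
# the tile's own world file (e.g. line1_10.tfw).
tfw = read_tfw("0.10\n0.0\n0.0\n-0.10\n565000.05\n4305000.95\n")
print(pixel_to_map(0, 0, tfw))   # (565000.05, 4305000.95)
print(pixel_to_map(100, 50, tfw))
```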
Multispectral remote sensing data acquired by Landsat 8 Operational Land Imager (OLI) sensor were analyzed using an automated technique to generate surficial mineralogy and vegetation maps of the conterminous western United States. Six spectral indices (e.g. band-ratios), highlighting distinct spectral absorptions, were developed to aid in the identification of mineral groups in exposed rocks, soils, mine waste rock, and mill tailings across the landscape. The data are centered on the Western U.S. and cover portions of Texas, Oklahoma, Kansas, the Canada-U.S. border, and the Mexico-U.S. border during the summers of 2013 – 2014. Methods used to process the images and algorithms used to infer mineralogical composition of surficial materials are detailed in Rockwell and others (2021) and were similar to those developed by Rockwell (2012; 2013). Final maps are provided as ERDAS IMAGINE (.img) thematic raster images and contain pixel values representing mineral and vegetation group classifications. Rockwell, B.W., 2012, Description and validation of an automated methodology for mapping mineralogy, vegetation, and hydrothermal alteration type from ASTER satellite imagery with examples from the San Juan Mountains, Colorado: U.S. Geological Survey Scientific Investigations Map 3190, 35 p. pamphlet, 5 map sheets, scale 1:100,000, http://doi.org/10.13140/RG.2.1.2769.9365. Rockwell, B.W., 2013, Automated mapping of mineral groups and green vegetation from Landsat Thematic Mapper imagery with an example from the San Juan Mountains, Colorado: U.S. Geological Survey Scientific Investigations Map 3252, 25 p. pamphlet, 1 map sheet, scale 1:325,000, http://doi.org/10.13140/RG.2.1.2507.7925. Rockwell, B.W., Gnesda, W.R., and Hofstra, A.H., 2021, Improved automated identification and mapping of iron sulfate minerals, other mineral groups, and vegetation from Landsat 8 Operational Land Imager Data: San Juan Mountains, Colorado, and Four Corners Region: U.S. 
Geological Survey Scientific Investigations Map 3466, scale 1:325,000, 51 p. pamphlet, https://doi.org/10.3133/sim3466/.
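A band-ratio index of the kind mentioned above is, at its core, a pixel-wise division of two co-registered bands, with nodata pixels masked out. The following is a generic illustration only; the six indices actually used, including their band combinations and thresholds, are defined in Rockwell and others (2021):

```python
def band_ratio(band_num, band_den, nodata=0):
    """Pixel-wise ratio of two co-registered, flattened bands;
    pixels where the denominator equals the nodata value yield None.
    Purely illustrative of the band-ratio concept."""
    out = []
    for vn, vd in zip(band_num, band_den):
        out.append(None if vd == nodata else vn / vd)
    return out

# Toy digital numbers for two hypothetical OLI bands
red  = [300, 250, 400, 100]
blue = [100, 250,   0, 200]
print(band_ratio(red, blue))  # [3.0, 1.0, None, 0.5]
```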
International Journal of Engineering and Advanced Technology FAQ - ResearchHelpDesk - International Journal of Engineering and Advanced Technology (IJEAT) is a bi-monthly international journal (Online ISSN 2249-8958) published in February, April, June, August, October, and December by Blue Eyes Intelligence Engineering & Sciences Publication (BEIESP), Bhopal (M.P.), India, since 2011. It is an academic, online, open-access, double-blind, peer-reviewed international journal. It aims to publish original, theoretical, and practical advances in Computer Science & Engineering, Information Technology, Electrical and Electronics Engineering, Electronics and Telecommunication, Mechanical Engineering, Civil Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. All submitted papers are reviewed by the IJEAT review board. The aims of IJEAT are to: disseminate original, scientific, theoretical, or applied research in engineering and allied fields; provide a platform for publishing results and research with a strong empirical component; bridge the significant gap between research and practice by promoting the publication of original, novel, industry-relevant research; and solicit original and unpublished research papers, based on theoretical or experimental work, for publication globally.
Scope of IJEAT: International Journal of Engineering and Advanced Technology (IJEAT) covers all topics of all engineering branches, including Computer Science & Engineering, Information Technology, Electronics & Communication, Electrical and Electronics, Electronics and Telecommunication, Civil Engineering, Mechanical Engineering, Textile Engineering, and all interdisciplinary streams of Engineering Sciences. The main topics include, but are not limited to: 1. Smart Computing and Information Processing Signal and Speech Processing Image Processing and Pattern Recognition WSN Artificial Intelligence and machine learning Data mining and warehousing Data Analytics Deep learning Bioinformatics High Performance computing Advanced Computer networking Cloud Computing IoT Parallel Computing on GPU Human Computer Interactions 2. Recent Trends in Microelectronics and VLSI Design Process & Device Technologies Low-power design Nanometer-scale integrated circuits Application specific ICs (ASICs) FPGAs Nanotechnology Nano electronics and Quantum Computing 3. Challenges of Industry and their Solutions, Communications Advanced Manufacturing Technologies Artificial Intelligence Autonomous Robots Augmented Reality Big Data Analytics and Business Intelligence Cyber Physical Systems (CPS) Digital Clone or Simulation Industrial Internet of Things (IIoT) Manufacturing IOT Plant Cyber security Smart Solutions – Wearable Sensors and Smart Glasses System Integration Small Batch Manufacturing Visual Analytics Virtual Reality 3D Printing 4.
Internet of Things (IoT) Internet of Things (IoT) & IoE & Edge Computing Distributed Mobile Applications Utilizing IoT Security, Privacy and Trust in IoT & IoE Standards for IoT Applications Ubiquitous Computing Block Chain-enabled IoT Device and Data Security and Privacy Application of WSN in IoT Cloud Resources Utilization in IoT Wireless Access Technologies for IoT Mobile Applications and Services for IoT Machine/ Deep Learning with IoT & IoE Smart Sensors and Internet of Things for Smart City Logic, Functional programming and Microcontrollers for IoT Sensor Networks, Actuators for Internet of Things Data Visualization using IoT IoT Application and Communication Protocol Big Data Analytics for Social Networking using IoT IoT Applications for Smart Cities Emulation and Simulation Methodologies for IoT IoT Applied for Digital Contents 5. Microwaves and Photonics Microwave filter Micro Strip antenna Microwave Link design Microwave oscillator Frequency selective surface Microwave Antenna Microwave Photonics Radio over fiber Optical communication Optical oscillator Optical Link design Optical phase lock loop Optical devices 6. Computation Intelligence and Analytics Soft Computing Advance Ubiquitous Computing Parallel Computing Distributed Computing Machine Learning Information Retrieval Expert Systems Data Mining Text Mining Data Warehousing Predictive Analysis Data Management Big Data Analytics Big Data Security 7. Energy Harvesting and Wireless Power Transmission Energy harvesting and transfer for wireless sensor networks Economics of energy harvesting communications Waveform optimization for wireless power transfer RF Energy Harvesting Wireless Power Transmission Microstrip Antenna design and application Wearable Textile Antenna Luminescence Rectenna 8. 
Advance Concept of Networking and Database Computer Network Mobile Adhoc Network Image Security Application Artificial Intelligence and machine learning in the Field of Network and Database Data Analytic High performance computing Pattern Recognition 9. Machine Learning (ML) and Knowledge Mining (KM) Regression and prediction Problem solving and planning Clustering Classification Neural information processing Vision and speech perception Heterogeneous and streaming data Natural language processing Probabilistic Models and Methods Reasoning and inference Marketing and social sciences Data mining Knowledge Discovery Web mining Information retrieval Design and diagnosis Game playing Streaming data Music Modelling and Analysis Robotics and control Multi-agent systems Bioinformatics Social sciences Industrial, financial and scientific applications of all kind 10. Advanced Computer networking Computational Intelligence Data Management, Exploration, and Mining Robotics Artificial Intelligence and Machine Learning Computer Architecture and VLSI Computer Graphics, Simulation, and Modelling Digital System and Logic Design Natural Language Processing and Machine Translation Parallel and Distributed Algorithms Pattern Recognition and Analysis Systems and Software Engineering Nature Inspired Computing Signal and Image Processing Reconfigurable Computing Cloud, Cluster, Grid and P2P Computing Biomedical Computing Advanced Bioinformatics Green Computing Mobile Computing Nano Ubiquitous Computing Context Awareness and Personalization, Autonomic and Trusted Computing Cryptography and Applied Mathematics Security, Trust and Privacy Digital Rights Management Networked-Driven Multicourse Chips Internet Computing Agricultural Informatics and Communication Community Information Systems Computational Economics, Digital Photogrammetric Remote Sensing, GIS and GPS Disaster Management e-governance, e-Commerce, e-business, e-Learning Forest Genomics and Informatics Healthcare Informatics 
Information Ecology and Knowledge Management Irrigation Informatics Neuro-Informatics Open Source: Challenges and opportunities Web-Based Learning: Innovation and Challenges Soft computing Signal and Speech Processing Natural Language Processing 11. Communications Microstrip Antenna Microwave Radar and Satellite Smart Antenna MIMO Antenna Wireless Communication RFID Network and Applications 5G Communication 6G Communication 12. Algorithms and Complexity Sequential, Parallel And Distributed Algorithms And Data Structures Approximation And Randomized Algorithms Graph Algorithms And Graph Drawing On-Line And Streaming Algorithms Analysis Of Algorithms And Computational Complexity Algorithm Engineering Web Algorithms Exact And Parameterized Computation Algorithmic Game Theory Computational Biology Foundations Of Communication Networks Computational Geometry Discrete Optimization 13. Software Engineering and Knowledge Engineering Software Engineering Methodologies Agent-based software engineering Artificial intelligence approaches to software engineering Component-based software engineering Embedded and ubiquitous software engineering Aspect-based software engineering Empirical software engineering Search-Based Software engineering Automated software design and synthesis Computer-supported cooperative work Automated software specification Reverse engineering Software Engineering Techniques and Production Perspectives Requirements engineering Software analysis, design and modelling Software maintenance and evolution Software engineering tools and environments Software engineering decision support Software design patterns Software product lines Process and workflow management Reflection and metadata approaches Program understanding and system maintenance Software domain modelling and analysis Software economics Multimedia and hypermedia software engineering Software engineering case study and experience reports Enterprise software, middleware, and tools Artificial 
intelligent methods, models, techniques Artificial life and societies Swarm intelligence Smart Spaces Autonomic computing and agent-based systems Autonomic computing Adaptive Systems Agent architectures, ontologies, languages and protocols Multi-agent systems Agent-based learning and knowledge discovery Interface agents Agent-based auctions and marketplaces Secure mobile and multi-agent systems Mobile agents SOA and Service-Oriented Systems Service-centric software engineering Service oriented requirements engineering Service oriented architectures Middleware for service based systems Service discovery and composition Service level agreements (drafting,
As of 03/03/2024, the Sevilleta Long-Term Ecological Research Program is equipped with a total of 65 digital RGB cameras, or PhenoCams, across the Sevilleta National Wildlife Refuge. These cameras are installed on eddy covariance flux towers and at a number of precipitation manipulation experiments to track vegetation phenology and productivity across dryland ecotones. PhenoCams have been paired with eddy covariance flux tower data at the site since 2014, while some Mean-Variance Experiment PhenoCams were installed as recently as June 2023. For information on PhenoCam data processing and formatting, see Richardson et al., 2018, Scientific Data (https://doi.org/10.1038/sdata.2018.28), Seyednasrollah et al., 2019, Scientific Data (https://doi.org/10.1038/s41597-019-0229-9), and the PhenoCam Network web page (https://phenocam.nau.edu/webcam/). The PhenoCam Network uses imagery from digital cameras to track vegetation phenology and seasonal changes in vegetation activity in diverse ecosystems across North America and around the world. Imagery is uploaded to the PhenoCam server hosted at Northern Arizona University, where it is made publicly available in near-real time, every 30 minutes from sunrise to sunset, 365 days a year. The data are processed using simple image analysis tools to yield a measure of canopy greenness, from which phenological metrics are extracted, characterizing the start and end of the growing season. These transition dates have been shown to align well with on-the-ground observations at various research sites. Long-term PhenoCam data can be used to track the impact of climate variability and change on the rhythm of the seasons.
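The canopy greenness measure extracted from PhenoCam imagery is the green chromatic coordinate (GCC), GCC = G / (R + G + B), averaged over a region of interest in each image (Richardson et al., 2018). A minimal sketch of that calculation:

```python
def gcc(r, g, b):
    """Green chromatic coordinate for one pixel: GCC = G / (R + G + B).
    Returns 0.0 for an all-black pixel to avoid division by zero."""
    total = r + g + b
    return g / total if total else 0.0

def mean_gcc(pixels):
    """Average GCC over a region of interest given as (R, G, B)
    digital numbers, as in a PhenoCam greenness time series."""
    vals = [gcc(r, g, b) for r, g, b in pixels]
    return sum(vals) / len(vals)

# Toy region of interest: greener pixels push GCC above 1/3
roi = [(60, 120, 60), (50, 100, 50), (80, 80, 80)]
print(round(mean_gcc(roi), 4))  # 0.4444
```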
https://spdx.org/licenses/CC0-1.0
This data set contains orthophotos in the vicinity of the EMS tower at Harvard Forest, as well as the flight logs from the unmanned aerial vehicle (UAV) used to obtain the digital images used in orthophoto creation. Orthophotos were created by mosaicking approximately 200 JPEG images from each date of observation. The orthophotos cover the spatial extent of the 250 meter resolution MODIS pixel that contains the EMS tower. Land cover types in the area of photography include deciduous and evergreen forest, and wetlands. The research goal of data collection for this data set was to observe spatial variance in plant phenology. Therefore, photos were taken from before leaf out until after leaf drop. Orthophotos were collected approximately every 5 days during spring and weekly during fall; see filenames for specific dates. The nominal spatial resolution of the orthophotos is 6 cm; however, due to various factors, including inaccuracy of the onboard GPS, wind-blown motion of trees, the automated orthophoto mosaicking process, and user error in final georeferencing, image analysis has been conducted at 10 m resolution. The orthophotos are available as GeoTIFF files.
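One common way to carry out analysis at a coarser resolution than the native pixels is block averaging of the fine raster. A minimal sketch, assuming the coarsening factor divides the raster dimensions evenly; this illustrates the idea only and is not necessarily the aggregation method used for this data set:

```python
def block_average(grid, factor):
    """Aggregate a 2-D raster to coarser resolution by averaging
    non-overlapping factor x factor blocks. Grid dimensions are
    assumed to be exact multiples of `factor`."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r0 in range(0, rows, factor):
        row = []
        for c0 in range(0, cols, factor):
            block = [grid[r][c]
                     for r in range(r0, r0 + factor)
                     for c in range(c0, c0 + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# 4x4 toy raster aggregated to 2x2 (half the resolution)
fine = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(block_average(fine, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```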
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We mapped 66 Ecological Mapping Systems (EMS) for eight coastal counties in south Texas, from Refugio and Aransas County south to the Mexican border. Land cover (LC), geophysical setting information, and woody vegetation height were all attributed to image objects derived from 10 m Sentinel-2 satellite imagery to model EMS type. A supervised process with training data collected from aerial photographs, aided by quantitative, species-specific, ground-collected virtual plot data, was used to classify LC in a RandomForest framework. Out of bag (OOB) error for LC was 15.24%. Recently collected LiDAR point cloud information was used to map height for woody vegetation, and the height was, in turn, used to distinguish between herbaceous, shrubland, and woodland/forest types via modification of LC results, and to define several canopy >10 m versions of forested EMS types. Geophysical settings were mapped based primarily on the distribution of soil Map Units (MUs) from the national digital soil survey (gSSURGO). Elevation and potential ponding information were derived from analysis of LiDAR-derived digital elevation models (DEMs) as an aid in mapping several EMS types. Heads-up modification of both LC and EMS modeling results using aerial photograph interpretation improved results. The agreement between EMS mapped type and field-collected data (most 10 years old or more) was >75%. The most abundant EMS types included Coastal and Sandsheet: Deep Sand Grassland (10.7% of the region), Native Invasive: Mesquite/Mixed Shrubland (5.0%), Gulf Coast: Coastal Prairie (4.6%), and South Texas: Sandy Mesquite Savanna Grassland (4.4%). The improved land cover, geophysical settings data, vegetation height data, and the use of finer-resolution image objects for modeling enabled mapping of all EMS types more accurately than previous datasets. 
The new EMS dataset will facilitate analysis and conservation of important habitats and modeling of species of concern that are tied to those habitats.
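The height-based distinction among herbaceous, shrubland, and woodland/forest types described above can be sketched as a simple set of thresholds on LiDAR canopy height. Only the >10 m tall-canopy split is stated in this description; the other break points below are illustrative assumptions:

```python
def physiognomy(height_m, shrub_min=0.5, forest_min=5.0):
    """Classify a woody-vegetation pixel by canopy height.
    The 0.5 m and 5 m break points are illustrative assumptions;
    only the >10 m tall-canopy split comes from the dataset text."""
    if height_m < shrub_min:
        return "herbaceous"
    if height_m < forest_min:
        return "shrubland"
    if height_m > 10.0:
        return "woodland/forest (>10 m canopy)"
    return "woodland/forest"

print([physiognomy(h) for h in (0.2, 2.0, 8.0, 14.0)])
# ['herbaceous', 'shrubland', 'woodland/forest', 'woodland/forest (>10 m canopy)']
```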
GMS's VISSR data have been received and processed at the Meteorological Satellite Center (MSC), Japan Meteorological Agency (JMA). VISSR data were stored on magnetic tape until February 1987; MSC and JWA now offer these data on cassette magnetic tape. Since 1981, MSC and JWA have distributed duplicate magnetic tapes and photo products, including movies, to users on a non-real-time basis.

Photo products:
========================================================================
Type of Data               Medium         Retention  Data Availability
========================================================================
Original film              negative film  10 yr      Apr 1978 - present
Printed positive           paper          5 yr       Apr 1978 - present
Microfilm                  film           perm       Apr 1978 - present
False color analysis film  film           3 yr       Apr 1978 - Feb 1987
16mm animation film        film           10 yr      Apr 1978 - present
VTR tape                   video TP       10 yr      Nov 1987 - present

VISSR image data:
========================================================================
Type of Data               Medium         Retention  Data Availability
========================================================================
IR data                    MT             5 yr       Mar 1981 - Feb 1987
                           CT             5 yr       Mar 1987 - present
VIS data                   MT             5 yr       Mar 1981 - Feb 1987
                           CT             5 yr       Mar 1987 - present

Processed data:
========================================================================
Type of Data                  Medium  Retention  Data Availability
========================================================================
VISSR histogram data (IR)     MT      10 yr      Jun 1982 - Oct 1982,
                                                 Jun 1984 - Feb 1987
                              CT      10 yr      Mar 1987 - present
VISSR histogram data (VIS)    CT      10 yr      Mar 1987 - present
Cloud grid data               CT      10 yr      Mar 1987 - present
Cloud Amount                  MT      10 yr      Feb 1978 - present
Sea Surface Temp (SST)        MT      10 yr      Feb 1978 - Feb 1987
Surface Temperature           MT      10 yr      Mar 1987 - present
Brightness temp distribution  MT      10 yr      Mar 1987 - present
Cloud motion wind             MT      10 yr      Apr 1978 - present
ISCCP data (B1)               MT      10 yr      Jul 1983 - present
ISCCP data (B2)               MT      10 yr      Apr 1988 - present
GPCP data                     MT      10 yr      Jan 1986 - present
SEM data                      MT      10 yr      Apr 1978 - present

Chart products:
========================================================================
Type of Data                   Medium  Retention  Data Availability
========================================================================
Cloud amount                   chart   5 yr
Sea Surface Temperature        chart   5 yr
Surface Temperature            chart   5 yr
Brightness Temp. distribution  chart   5 yr      Mar 1987 - present
Cloud motion wind              chart   5 yr
SCIC (Vicinity of Japan)       chart   5 yr      Mar 1987 - present
SCIC (Far East area)           chart   5 yr      Mar 1987 - present
Cloud Analysis                 chart   5 yr      - Feb 1987
SEM                            chart   5 yr