Initial training dataset for DeepLandforms: A Deep Learning Computer Vision toolset applied to a prime use case for mapping planetary skylights
The Planetary Rover Driving Status Anomaly Detection Dataset is generated and provided by the Moon Wreckers Team from the Carnegie Mellon University Robotics Institute as part of the Roverside Assistance Project. This dataset includes five ROS bag files collected from the AK1 rover in different driving statuses and one ROS bag file collected from the AK2 rover in the stopped status. Topic messages exported to .csv files are also provided, including (1) the raw rover odometry estimated from wheel encoders, (2) the filtered rover odometry fused from both wheel encoders and IMUs, and (3) the odometry from VIVE trackers.
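The exported .csv topic files can be inspected with standard tooling. A minimal sketch follows; the column names (`x`, `y`) are hypothetical, since the dataset's exact schema is not given above, and should be replaced with the actual header names:

```python
import csv

def load_odometry(path):
    """Read an exported odometry .csv into a list of dicts.

    Assumes a header row; the column names used below are hypothetical
    and should be replaced with the dataset's actual ones.
    """
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)]

def displacement(rows):
    """Net planar displacement between the first and last samples."""
    dx = float(rows[-1]["x"]) - float(rows[0]["x"])
    dy = float(rows[-1]["y"]) - float(rows[0]["y"])
    return (dx**2 + dy**2) ** 0.5
```

Comparing this displacement across the raw, filtered, and VIVE-tracker topics is one simple way to gauge odometry drift between the three sources.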
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
The Unmanned Aerial System (UAS) industry in the United States is still very much in its infancy, but its potential impacts on the geospatial mapping and surveying professions are indisputable.
In future years, imaging and remote-sensing observations from semi-autonomous Unmanned Aerial Vehicles (UAVs) will be key requirements for surveys of other planetary atmospheres and surfaces. In anticipation of these requirements, it is imperative that new technologies with increased automation capability, speed, and accuracy achievable during a single mission are developed, evaluated, and implemented.
For this project, a prototype autonomous rover system was developed and tested; it provides a framework to collect planetary remotely sensed data and leverages cloud computing services to produce environmental mapping products from that data.
This innovative technology could potentially support a wide variety of planetary data-gathering science missions while offering the flexibility to incorporate additional new techniques that could eventually be applied to rover swarms integrating planetary aerial and surface access systems. Additionally, this technology could potentially be used to address SSC-related facility monitoring and security issues, such as buffer-zone intrusions, and provide rapid-response support for both natural and man-made disasters.
In military operations, large remotely piloted UAVs have been successfully deployed for several years. The success of this application has spawned a new area of research: micro-autonomous aerial vehicles (micro-AAVs). Over the past two years, this research area has been pursued by universities and has resulted in a rich collection of micro-AAV platforms, ranging from small, open-platform systems using open-source waypoint navigation software to small, production-ready, commercial-off-the-shelf platforms with complex, highly intelligent flight management systems. These platforms are capable of supporting a full array of sensors and cameras, ranging from high-resolution, true-color still images to high-resolution real-time video streams. In addition, some platforms can support near-infrared (NIR) cameras that can be used to produce Normalized Difference Vegetation Index (NDVI) data products useful for vegetation health monitoring, similar to those generated today by our team using Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data.
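NDVI is computed per pixel from the red and near-infrared bands as (NIR - Red) / (NIR + Red). A minimal NumPy sketch (a generic formulation, not the team's MODIS pipeline):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    `eps` guards against division by zero over dark pixels. Values near +1
    indicate dense, healthy vegetation; values near 0 or below indicate
    bare soil, rock, or water.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```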
Additionally, rovers have been used successfully on Mars for over a decade to collect close-up imagery and other sensor data. For future lunar and planetary exploratory missions, the development of smaller and more efficient micro-rover platforms has been proposed, and prototypes have been built in a variety of forms with a variety of locomotion methods. Successful and safe exploration of these surfaces will require ultra-high-resolution terrain and feature data, as well as a flexible autonomous system to gather and process this data over wide areas.
For this project, the potential of an autonomous, cloud-enabled system, simulated as a tethered rover-balloon platform, for gathering and processing low-altitude, high-resolution imagery to create terrain models and thematic data products was explored and demonstrated. The tablet's cameras and sensors were used as a proxy for the AAV sensor and image data. A typical limiting factor of these small-payload systems (micro-AAVs) is the computational power that can be deployed on them, which correspondingly limits their autonomous capabilities. To increase computational capacity, data was pushed to a cloud location for access by the processing system; this project therefore explored using cloud computing to augment the computational capacity of a tablet.
The tablet and a commercial-off-the-shelf (COTS) smartphone with a camera were able to establish communication with the cloud by tethering to the tablet's mobile Wi-Fi hotspot for internet access. This allowed for real-time data processing, analysis, and autonomous flight operations based on those observations.
Therefore, for this project, the effective computational power of these platforms was increased by simulating cloud computing services via a local virtual machine data processing system. Using this virtual machine to establish communication with the cloud, the computational capacity of the simulated micro-AAV was augmented, enabling real-time data processing and analysis based on those observations.
Future testing of this data processing flow via a virtual machine could be translated directly to current cloud computing services with little modification; once implemented, it could enhance the capabilities of available UAV aerial rapid-response platforms in responding to natural or man-made disasters.
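The offload pattern described above (capture on the vehicle, process off-board, retrieve the product) can be sketched with a shared-directory queue standing in for the cloud storage endpoint. The directory layout, filenames, and extensions below are illustrative assumptions, not part of the project described:

```python
from pathlib import Path

def push_capture(inbox: Path, name: str, payload: bytes) -> Path:
    """Simulate the vehicle pushing a raw capture to shared (cloud) storage."""
    inbox.mkdir(parents=True, exist_ok=True)
    target = inbox / name
    target.write_bytes(payload)
    return target

def process_inbox(inbox: Path, outbox: Path, transform) -> int:
    """Simulate the cloud side: turn every pending capture into a product.

    `transform` stands in for whatever heavy processing (e.g. NDVI,
    terrain modeling) runs on the cloud side. Returns the number of
    captures processed.
    """
    outbox.mkdir(parents=True, exist_ok=True)
    count = 0
    for item in sorted(inbox.glob("*.raw")):
        product = transform(item.read_bytes())
        (outbox / (item.stem + ".product")).write_bytes(product)
        item.unlink()  # remove the consumed capture from the queue
        count += 1
    return count
```

In a real deployment the shared directory would be replaced by a cloud object store, with the vehicle-side code unchanged apart from the upload call.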
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Solar and wind data collected on the University of Rochester campus.
Dataset Card for S2-100K
The S2-100K dataset is a set of 100,000 multi-spectral satellite images sampled from Sentinel-2 via the Microsoft Planetary Computer. The Copernicus Sentinel data was captured between Jan 1, 2021 and May 17, 2023. The dataset is sampled approximately uniformly over landmass and only includes images without cloud coverage. The dataset is available for research purposes only. If you use the dataset, please cite our paper. More information on the dataset can… See the full description on the dataset page: https://huggingface.co/datasets/davanstrien/satclip.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
The purpose of this dataset is to train a classifier to detect "dusty" versus "not dusty" patches within browse-resolution HiRISE observations of the Martian surface. Here, "dusty" refers to images in which the view of the surface has been obscured heavily by atmospheric dust.
The dataset contains two sets of 20,000 image patches each from EDR (full resolution) and RDR ("browse" resolution) non-map-projected ("nomap") HiRISE images, with balanced classes. The patches have been split into train (n = 10,000), validation (n = 5,000), and test (n = 5,000) sets such that patches from the same HiRISE observation never appear in more than one of these subsets. There could be some noise in the labels, but a subset of the validation images has been manually vetted so that label noise rates can be estimated. More details on the dataset creation process are described below.
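An observation-disjoint split like the one described above can be reproduced with a simple group-aware assignment. The deterministic hashing below is an illustrative choice, not the authors' method, and the way the observation ID is extracted from the example ID is an assumption based on the example IDs shown later:

```python
import hashlib

def split_by_observation(example_ids, fractions=(0.5, 0.25, 0.25)):
    """Assign examples to train/val/test so that all patches from one
    HiRISE observation land in the same subset.

    The observation ID is assumed to be the middle portion of the example
    ID, e.g. 'PSP_004440_2125' in '003100_PSP_004440_2125_r4805_c512'.
    """
    names = ("train", "val", "test")
    cuts = (fractions[0], fractions[0] + fractions[1])
    assignment = {}
    for ex in example_ids:
        obs = "_".join(ex.split("_")[1:4])  # e.g. 'PSP_004440_2125'
        # Hash the observation ID to a deterministic value in [0, 1).
        h = int(hashlib.md5(obs.encode()).hexdigest(), 16) / 16**32
        subset = names[0] if h < cuts[0] else names[1] if h < cuts[1] else names[2]
        assignment[ex] = subset
    return assignment
```

Because the subset is a function of the observation ID alone, no observation can leak across the train/validation/test boundary.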
Generating Candidate Images and Patches
To begin constructing the dataset, the paper "The origin, evolution, and trajectory of large dust storms on Mars during Mars years 24–30 (1999–2011)," by Wang and Richardson (2015), was used to compile a set of time ranges during which global or regional dust storms were known to be occurring on Mars. All HiRISE RDR nomap browse images acquired within these time ranges were then inspected manually to determine sets of images that were (1) almost entirely obscured by dust and (2) almost entirely clear of dust. Then, 10,000 patches were extracted from each of the two subsets of images to form the "dusty" and "not dusty" classes. The extracted patches are 100-by-100 pixels, which roughly corresponds to the width of one CCD channel within the browse image (the width of the raw EDR data products that are stitched together to form a full RDR image). Some small amount of label noise is introduced in this process, since a patch from a mostly dusty image might happen to contain a clear view of the ground, and a patch from a mostly non-dusty image might contain some dust or featureless surface regions that look like dusty patches. A set of "vetting labels" is included, comprising human annotations by the author for a subset of the validation patches. These labels can be used to estimate the apparent label noise in the dataset.
Corresponding to the RDR patch dataset, a set of patches was extracted from the same set of EDR images for the "dusty" and "not dusty" classes. EDRs are raw images from the instrument that have not been calibrated or stitched together. To provide some form of normalization, EDR patches are only extracted from the lower half of the EDRs, with the upper half being used to perform a basic calibration of the lower half. Basic calibration is done by subtracting the sample (image column) averages from the upper half to remove "striping," then computing the 0.1th and 99.9th percentiles of the remaining values in the upper half and stretching the image patch to 8-bit integer values [0, 255] within that range. The calibration is meant to implement a process that could be performed onboard the spacecraft as the data is being observed (hence, using the top half of the image, acquired first, to calibrate the lower half, which is acquired later). The full-resolution EDRs, which are 1024 pixels wide, are resized down to 100-by-100 pixel patches after being extracted so that they roughly match the resolution of the patches from the RDR browse images.
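The basic calibration described above (de-striping with the upper half's column means, then a 0.1–99.9 percentile stretch to 8-bit) can be sketched in NumPy. This is an illustrative reading of the text, not the author's exact code:

```python
import numpy as np

def calibrate_lower_half(edr):
    """Calibrate the lower half of an EDR using statistics of the upper half.

    1. Subtract the per-column means of the upper half (removes "striping").
    2. Stretch to [0, 255] between the 0.1th and 99.9th percentiles of the
       de-striped upper-half values.
    """
    edr = np.asarray(edr, dtype=float)
    top, bottom = np.array_split(edr, 2, axis=0)
    col_means = top.mean(axis=0)
    top_destriped = top - col_means
    bottom_destriped = bottom - col_means
    lo, hi = np.percentile(top_destriped, [0.1, 99.9])
    scaled = (bottom_destriped - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Because both steps use only the already-acquired top half, the same logic could in principle run onboard as the bottom half streams in, which is the motivation given above.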
Archive Contents
The compressed archive file contains two top-level directories with similar contents, "edr_nomap_full_resized" and "rdr_nomap_browse." The first directory contains the dataset constructed from EDR data and the second contains the dataset constructed from RDR data.
Within each directory, there are "dusty" and "not_dusty" directories containing the image patches from each class, "manifest.csv," and "vetting_labels.csv." The vetting labels file contains a list of manually labeled examples, along with the original labels to make it easier to compute label noise rates. The "manifest.csv" file contains a list of every example, its label, and whether it belongs to the train, validation, or test set.
An example ID encodes information about where the patch was sampled from the original HiRISE image. As an example from the RDR dataset, the ID "003100_PSP_004440_2125_r4805_c512" can be broken into several parts:
For the EDR dataset, the ID "200000_PSP_004530_1030_RED7_1_r9153" is broken down as follows:
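The field-by-field breakdowns referenced above are not reproduced here. The regex below is a plausible parse of the two example IDs only; the field names and meanings (prefix, CCD, channel, row, column) are assumptions inferred from the ID shapes, not documented facts:

```python
import re

# Hypothetical interpretation: numeric prefix, HiRISE observation ID,
# optional CCD/channel fields (EDR IDs only), then positional suffixes
# such as r<row> and c<column>. Field meanings are inferred, not documented.
ID_PATTERN = re.compile(
    r"^(?P<prefix>\d+)_"
    r"(?P<observation>[A-Z]+_\d+_\d+)"
    r"(?:_(?P<ccd>RED\d+)_(?P<channel>\d+))?"
    r"_r(?P<row>\d+)"
    r"(?:_c(?P<col>\d+))?$"
)

def parse_example_id(example_id):
    """Split an example ID into its apparent components."""
    m = ID_PATTERN.match(example_id)
    if m is None:
        raise ValueError(f"unrecognized example ID: {example_id}")
    return m.groupdict()
```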
Original Data
The original HiRISE EDR and RDR data is available via the Planetary Data System (PDS), hosted at https://hirise-pds.lpl.arizona.edu/PDS/
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The goal of this project is to create a map of the planet Mars using ESRI software. For this, a 3D project was developed in ArcGIS Pro, considering a global scene, to be published on an online platform. All the various data from Mars will be available on a single website, where everyone can visualize and interact with it. The Red Planet has been studied for many decades, and this year marks the launch of a new rover, Mars2020, scheduled for the 17th of July. This new rover will continue the ongoing work of the Curiosity rover, which landed in 2012. The main objective of these rovers is to determine whether Mars could have supported life, by studying its water, climate and geology. Currently, the only operational rover on Mars is Curiosity, and with that in mind, this project has a strong focus on the path taken by this rover during almost 8 years of exploration. In the web application, the user can see the course taken by Curiosity in Mars' Gale Crater, from its landing until January 2020. The map highlights several points of interest, such as the rover's location after each Earth year spent on Mars and after every kilometer traveled; these points can be interacted with, and photos taken at each location can be browsed through a pop-up window. Additionally, the application also supports global data of Mars. The two main pieces, used as basemaps, are the global imagery, with a pixel size of 925 meters, and the Digital Elevation Model (DEM), with 200 meters per pixel. The DEM represents the topography of Mars and was also used to develop relief and slope maps. Furthermore, the application includes data regarding the geology of the planet and nomenclature to identify regions, areas of interest and craters of Mars. This project wouldn't have been possible without NASA's open-source philosophy, working alongside other entities such as the European Space Agency, the International Astronomical Union and the Working Group for Planetary System Nomenclature.
All the data related to imagery, DEM raster files, Mars geology and nomenclature was obtained from the USGS Astrogeology Science Center database. Finally, the data related to the Curiosity rover was obtained from the portal of The Planetary Society. Working with global datasets means working with very large files, so selecting the right approach is crucial, and there isn't much margin for experiments; in fact, a wrong step means losing several hours of computing time. All the downloaded data came in Mars Coordinate Reference Systems (CRS) and, luckily, ESRI handles that format well. This not only allowed the development of accurate analyses of the planet, but also modelling the data around a globe. One limitation, however, is that ESRI only has the celestial body for planet Earth, so the Mars imagery and elevation were wrapped around Earth. ArcGIS Pro allows CRS transformation on the fly, but rendering times were not efficient, so the workaround was to project all data into WGS84. The slope map, with its reclassification and hillshading, was developed in the original CRS. This process was done twice: once globally and once for the Gale Crater. The results show that the crater's slope characteristics are quite different from the global panorama of Mars. The crater has a depression that is approximately 5000 meters deep, but at its top it is possible to identify an elevation of 750 meters, according to the altitude system of Mars. These discrepancies in a relatively small area result in very high slope values. Globally, 88% of the area has slopes of less than 2 degrees, while in the Gale Crater this value is only 36%. Slopes between 2 and 10 degrees represent almost 60% of the area of the crater, while globally they represent only 10% of the area. A considerable area with more than 10 degrees of slope can also be found within the crater, but globally this value is less than 1%.
By combining Curiosity's track path with the DEM, a profile graph of the path was obtained. It is possible to observe that Curiosity landed in a flat area and had been exploring along a "steady path". However, in the last few years (since the 12th km), the rover has been more adventurous and has started to climb the crater: in the last 10 km of its journey, Curiosity "climbed" around 300 meters, whereas in the first 11 km it never went above 100 meters. With the data processed in the WGS84 system, all was ready to start modelling Mars, which was first done in ArcGIS Pro. Once the data was loaded and the symbology and pop-ups configured, the project was exported to ArcGIS Online. Both the imagery and elevation layers were exported as hosted tile services. This was a key step, since keeping the same level of detail online as offline would have steeply increased the imagery size to hundreds of terabytes, so a lot of work went into balancing tile cache size against the intended imagery quality. Exporting the remaining data was straightforward: these files were exported as vectors. Once all the data was in the Online Portal, a global web scene was developed. This is an ongoing project, with an outlook to develop the global scene into an application with ESRI's AppBuilder, allowing the addition of more information. In the future, there is also interest in expanding the displayed data, such as adding the paths taken by other rovers in the past, alongside detailed imagery of areas beyond the Gale Crater. Finally, with 2021 being the year when the new rover Mars2020 will land on the Red Planet, we might look into adding it to this project as well. https://arcg.is/KuS4r
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the primary and secondary event record file of the LSDO dataset. Please see the reference link to access the full dataset with solar images.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
How do small-scale dynamics organize in turbulent flows to grow large-scale coherent circulation? This question of the formation of large-scale structures in three-dimensional (3D) turbulent flows is at the heart of fundamental studies in fluid dynamics, and it is equally important for our understanding of atmospheric dynamics, oceanography, meteorology and, more generally, geophysical fluid dynamics. Here, we deliver a data collection that (1) gathers measurements of 3D turbulent flows that emulate the planetary atmospheres of the gas giants. Turbulent flows are explored using three different approaches: laboratory experiments, numerical simulations and direct planetary observations. All data sets are formatted so that flow properties can be easily extracted, i.e. high-resolution maps of the different velocity components and flow vorticity (useful for further diagnostics). The data collected are fully described in Cabanes et al., GRL (2020), "Revealing the intensity of turbulent energy transfer in planetary atmospheres," and can be used to compute (2) theoretical diagnostics with the numerical codes that reveal the physical meaning of flow measurements. The numerical codes are available at https://github.com/scabanes
We deliver (1) data collection and (2) numerical codes in the following files attached:
(1) Data collection:
(2) Numerical codes:
The purpose of this data collection is to reveal statistical properties of planetary flows. By computing the same analysis on different data sets, the researcher allows direct confrontation of planetary observations with idealized laboratory and numerical models. Idealized models are specially designed to sweep a large array of parameters in order to understand which parameters control planetary global circulation. The data collected and generated deliver (1) velocity measurements of 3D turbulent flows from the different approaches (observations, laboratory, numerics) and (2) guidelines to compute the appropriate statistical analysis through the PTST. The ground-breaking novelty here is the possibility to compute statistical diagnostics adapted to the different geometries: the spherical geometry of planetary flows (2D latitude-longitude maps), the cylindrical geometry of laboratory experiments (2D flows in a rotating cylindrical tank), and the Cartesian geometry of idealized numerical simulations. Indeed, the math behind each statistical diagnostic must account for the different geometrical configurations in order to properly confront the different approaches. The PTST is also designed to be easily re-used by different communities, such as experimentalists, numericists and atmospheric scientists, who deal with 3D or 2D turbulent flows.
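As one concrete example of such a diagnostic, in the Cartesian (periodic) case a kinetic-energy spectrum can be computed by binning the 2D FFT of the velocity field over wavenumber shells. This is a generic textbook sketch, not code from the repository above, and it assumes a square periodic domain:

```python
import numpy as np

def energy_spectrum_2d(u, v):
    """Isotropic kinetic-energy spectrum E(k) of a 2D velocity field.

    Bins 0.5 * (|u_hat|^2 + |v_hat|^2) over integer wavenumber shells,
    assuming a square, periodic domain sampled on an N x N grid.
    """
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers
    kmag = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    shells = np.rint(kmag).astype(int)             # shell index |k|
    return np.bincount(shells.ravel(), weights=e.ravel())
```

For the spherical and cylindrical geometries mentioned above, the same binning idea applies but the transform must be adapted (e.g. spherical harmonics instead of the plane-wave FFT), which is precisely why geometry-specific codes are provided.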
Acknowledgments
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement N° 797012.
OpenMindat: Quickly Retrieve Datasets from the 'mindat.org' API. 'Mindat' ('mindat.org') is one of the world's most widely used databases of mineral species and their distribution. Many scientists in mineralogy, geochemistry, petrology, and other Earth and planetary disciplines use 'Mindat' data. Still, an open data service and a machine interface had never been fully established. To meet the overwhelming data needs, the 'Mindat' team has built an API () for data access. The 'OpenMindat' R package provides functions that bridge this data highway, connecting users' data requirements to the 'Mindat' API server and assisting with retrieval and initial processing, to further improve efficiency and lower the barrier of data query and access for scientists. 'OpenMindat' provides friendly and extensible data retrieval functions covering geomaterials (e.g., rocks, minerals, synonyms, varieties, mixtures, and commodities), localities, and the IMA (International Mineralogical Association)-approved mineral list. The 'OpenMindat' R package will accelerate data-intensive studies in mineral informatics and lead to more scientific discoveries.
F M Computer Planet C A Company Export Import Records. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Wild Planet's Spy Video TRAKR archive. As of the creation of this text file, these are the only Spy Video TRAKR programming documents left on the Wild Planet website, http://dev.spygear.net/help/files/ ; since there are still TRAKR owners interested in programming their TRAKRs, I have archived the documentation here for them to use (in case the above URL goes dark).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data release for Corrales, L., Gavilan, L., Teal, D. J., Kempton, E. M.-R., 2023, ApJL, in press
Provides the optical constants from tholins grown in the laboratory (Gavilan et al. 2017, 2018) and computed (Mie) cross-sections for a wide range of particle sizes, at wavelengths of 0.13-10 micron. Python libraries and code for reproducing this work are provided. A static, refactored version of Exo_Transmit (Kempton et al. 2017, Teal et al. 2022, Corrales et al. 2023) is also provided for computing exoplanet transmission spectra with the new tholin species.
See README file for a full list of contents and instructions for use.
The C2S-MS Floods Dataset is a dataset of global flood events with labeled Sentinel-1 & Sentinel-2 pairs. There are 900 sets (1800 total) of near-coincident Sentinel-1 and Sentinel-2 chips (512 x 512 pixels) from 18 global flood events. Each chip contains a water label for both Sentinel-1 and Sentinel-2, as well as a cloud/cloud shadow mask for Sentinel-2. The dataset was constructed by Cloud to Street in collaboration with and funded by the Microsoft Planetary Computer team.
The USGS Publications Warehouse is a citation clearinghouse that provides access to over 120,000 publications written by USGS scientists over the century-plus history of the bureau. All USGS series publications published since June 2003 are available digitally through the USGS web. Since 2009, all scholarly publications authored by USGS staff, including those published outside the USGS (scholarly journals, university presses, etc.), are cataloged with links to the original published sources. The USGS Publications Warehouse is managed and operated as part of the USGS Libraries Program. ScienceBase harvests all records nightly from the Publications Warehouse via a web service interface that provides the original records in the Metadata Object Description Standard (MODS) XML format. These records are ingested into a ScienceBase collection in order to make them available to the ScienceBase community of users and to provide alternate methods of access, including geospatial services that expose those publications documented with a spatial context. This collection record provides those interface options along with links to the Publications Warehouse website and search system, the primary mode of access to these resources.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Urania trajectory for the three flybys of Umbriel, in ASCII format. In increasing order of column number, the columns are: Coordinated Universal Time (UTC); seconds past J2000; spacecraft position (x, y, z) with respect to the center of Uranus in the IAU Uranus frame, in units of Uranus radii (25,559 km); and spacecraft position (x, y, z) with respect to the center of Umbriel in the IAU Umbriel frame, in units of Umbriel radii (584.7 km).
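A file with that column layout can be read as follows. The whitespace-delimited record format and the function names are assumptions, since only the column order is specified above:

```python
def parse_trajectory_line(line):
    """Parse one whitespace-delimited trajectory record.

    Column layout (from the description): UTC string, seconds past J2000,
    x/y/z w.r.t. Uranus (Uranus radii), x/y/z w.r.t. Umbriel (Umbriel radii).
    """
    fields = line.split()
    return {
        "utc": fields[0],
        "et_seconds": float(fields[1]),
        "pos_uranus_rU": tuple(float(v) for v in fields[2:5]),
        "pos_umbriel_rUm": tuple(float(v) for v in fields[5:8]),
    }

# Radii given in the dataset description, used to convert to kilometers.
URANUS_RADIUS_KM = 25559.0
UMBRIEL_RADIUS_KM = 584.7

def umbriel_distance_km(record):
    """Distance from Umbriel's center, converted from Umbriel radii to km."""
    x, y, z = record["pos_umbriel_rUm"]
    return (x * x + y * y + z * z) ** 0.5 * UMBRIEL_RADIUS_KM
```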
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset results from a prototype change alert system (Bunting et al., 2023) that has been developed to identify mangrove losses on a monthly basis. Implemented on the Microsoft Planetary Computer, the Global Mangrove Watch v3.0 mangrove baseline extent map for 2018 (Bunting et al., 2022) was refined and used to define the mangrove extent mask under which potential losses would be identified. The study period was 2018-2022 due to the availability of the Copernicus Sentinel-2 imagery used for the study. The alert system is based on optimised NDVI thresholds used to identify mangrove losses and a temporal scoring system used to filter false positives. The alert system was found to have an estimated overall accuracy of 92.1 %, with the alert commission and omission errors estimated at 10.4 % and 20.6 %, respectively. The alert system is presently limited to Africa, where significant losses were identified in the study period, with 90 % of the loss alerts found in Nigeria, Guinea-Bissau, Madagascar, Mozambique and Guinea. The drivers of those losses vary: in West Africa they are primarily economic activities such as agricultural conversion and infrastructure development, while in East Africa they are dominated by climatic drivers, primarily storm frequency and intensity. Production of the monthly loss alerts for Africa will be continued as part of the wider Global Mangrove Watch project, and the spatial coverage is expected to be expanded over the coming months and years. Future updates of the mangrove loss alerts will be via the Global Mangrove Watch portal: https://www.globalmangrovewatch.org
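The threshold-plus-temporal-scoring idea can be illustrated as follows. The specific scoring rule (increment per consecutive sub-threshold observation, reset on a clear one, alert at a minimum score) and the parameter values are a generic sketch, not the published Bunting et al. algorithm:

```python
def loss_alert(ndvi_series, threshold=0.3, min_score=3):
    """Return the index at which a loss alert would fire, or None.

    The score rises with each consecutive observation below `threshold`
    and resets on a clear observation, so single-date NDVI drops (clouds,
    sensor noise) do not trigger alerts. Parameter values are illustrative.
    """
    score = 0
    for i, value in enumerate(ndvi_series):
        if value < threshold:
            score += 1
            if score >= min_score:
                return i
        else:
            score = 0  # likely a false positive; reset the score
    return None
```

The trade-off the accuracy figures above reflect is visible here: a higher `min_score` suppresses more false positives (lower commission) at the cost of missing or delaying real losses (higher omission).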
https://www.datainsightsmarket.com/privacy-policy
The global satellite imagery and image processing services market is experiencing robust growth, driven by increasing demand across diverse sectors. The market, estimated at $15 billion in 2025, is projected to expand at a Compound Annual Growth Rate (CAGR) of 7% from 2025 to 2033, reaching approximately $25 billion by 2033. This expansion is fueled by several key factors. Firstly, advancements in satellite technology are providing higher-resolution imagery with improved accuracy and faster processing times, enabling more detailed analysis for various applications. Secondly, the rising adoption of cloud-based platforms for image processing and analytics is streamlining workflows and reducing costs for users. This is particularly crucial for smaller businesses and organizations that previously lacked access to sophisticated image processing capabilities. Thirdly, the growing need for precise geographical information across diverse sectors, including environmental monitoring, precision agriculture, urban planning, and disaster response, fuels market demand. The defense and security sector remains a significant contributor, with increasing reliance on satellite imagery for intelligence gathering and surveillance.

Market segmentation reveals significant opportunities within specific application areas. The environmental sector, utilizing satellite imagery for deforestation monitoring, climate change analysis, and pollution detection, is a rapidly growing segment. Similarly, the energy and power sector leverages satellite imagery for pipeline monitoring, renewable energy resource assessment, and infrastructure management. Within image processing types, the demand for advanced data analytics is soaring, with growing adoption of artificial intelligence and machine learning for automated feature extraction and predictive analysis.
While regulatory hurdles and the high initial investment cost of satellite technologies pose some challenges, the overall market outlook remains positive, driven by technological advancements, increasing data accessibility, and rising demand for location-based intelligence. Competition is intensifying amongst established players and new entrants, leading to innovation and affordability in the market.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Stardust@home: AVE, Composite Reliability, square-root of AVE (on diagonal; bold) and correlation between the latent constructs.