ACCEPT consists of an overall software infrastructure framework and two main software components. The infrastructure framework consists of code that pre-processes data, passes information between the two main components, learns models shared by nearly all of the elements in one of the two components (which requires calling third-party open source software modules), and selects which element/method should be used in each of the two main components. The two main components can use interchangeable software elements that enable the regression and detection functionality. Some software elements are distributed with the initial release, while others must be called separately as independent third-party elements that have already been open sourced.
Swim is a software information service for the grid built on top of Pour, which is an information service framework developed at NASA. Swim provides true software resource discovery integrated with the tools used by administrators to install software.
FISH_NON_NATIVE_ARC: This feature class is used to determine whether a particular non-native fish species is present in a stream, which can affect protection buffer widths in land management plans and activities. The data come primarily from the Hydro Update Project and reflect either actual observations or modeled fish presence. Although each district carried out its determinations in various ways, verification generally means that a fish biologist or other trained specialist was able to observe the fish in the stream.
ARC can be seen as a general artificial intelligence benchmark, as a program synthesis benchmark, or as a psychometric intelligence test. It is targeted at both humans and artificially intelligent systems that aim at emulating a human-like form of general fluid intelligence.
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('arc', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
This is a tiled collection of the 3D Elevation Program (3DEP) at 1/9 arc-second (approximately 3 m) resolution. The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Scientists and resource managers use 3DEP data for hydrologic modeling, resource monitoring, mapping and visualization, and many other applications. The elevations in this DEM represent the topographic bare-earth surface. The seamless 1/9 arc-second DEM layers are derived from diverse source data that are processed to a common coordinate system and unit of vertical measure. These data are distributed in geographic coordinates in units of decimal degrees, and in conformance with the North American Datum of 1983 (NAD 83). All elevation values are in meters and, over the continental United States, are referenced to the North American Vertical Datum of 1988 (NAVD 88). The seamless 1/9 arc-second DEM layer provides project-based coverage for portions of the conterminous United States, limited areas of Alaska, and Guam. It is available as pre-staged products tiled in 15-minute blocks in Erdas .img format. Since 2015, the seamless 1/9 arc-second DEM layer is no longer being updated. Other 3DEP products are nationally seamless DEMs in resolutions of 1/3, 1, and 2 arc-seconds. These seamless DEMs were referred to as the National Elevation Dataset (NED) from about 2000 through 2015, at which time they became the seamless DEM layers under the 3DEP program and the NED name and system were retired. Other 3DEP products include one-meter DEMs produced exclusively from high-resolution light detection and ranging (lidar) source data, five-meter DEMs in Alaska, and various source datasets including the lidar point cloud and interferometric synthetic aperture radar (ifsar) digital surface models and intensity images.
All 3DEP products are public domain.
Mutil provides mcp and msum, which are drop-in replacements for cp and md5sum that utilize multiple types of parallelism to achieve maximum copy and checksum performance on clustered file systems.
These feature classes represent the spatial extent and boundaries of BLM National Landscape Conservation System (NLCS) Other Related Lands within the BLM Administrative State of Utah. These lands are not in Wilderness or Wilderness Study Areas, but have been determined to have wilderness character through inventory or land use planning. They fall into one of two categories. The first category is lands with "wilderness value and characteristics": inventoried areas not in Wilderness or Wilderness Study Areas that have been determined to meet the size, naturalness, and outstanding solitude and/or outstanding primitive and unconfined recreation criteria. The second category is "wilderness characteristic protection areas": former lands with "wilderness value and characteristics" for which a plan decision has been made to protect them. Quality control is conducted annually. Complete metadata for these data sets can be found at:
BLM UT Other Related Lands (Arc)
BLM UT Other Related Lands (Polygon)
The layers within this feature service represent the spatial extent and boundaries of the segments designated as BLM National Conservation Lands (NCL) Wild and Scenic Rivers in Utah. The attributes for this feature class store the feature-level metadata for the polylines and document the origin and characteristics of each polyline. Lines are segmented when any of the attributes change (e.g., when the classification changes) or to capture changes in Outstandingly Remarkable Value (ORV). Every segment must have at least one record in the related table, nlcs_wsr_orv_tbl. WSRs were edited pursuant to S. 47: John D. Dingell, Jr. Conservation, Management, and Recreation Act, Public Law 116-9, March 12, 2019. Data within these services are a live copy of BLM Utah's enterprise production environment. Quality control is conducted annually. Complete metadata for these data sets can be found at:
BLM UT Designated Wild and Scenic Rivers (Arc)
BLM UT Designated Wild and Scenic River Corridors (Arc)
BLM UT Designated Wild and Scenic River Corridors (Polygon)
Catalog of an Arc-Grid-based derivative of the SRTM 3-arc-second Version 2 DEM for Africa, a seamless tiled compilation with ocean and terrestrial void areas set to null using the SRTM-SWBD 1-arc-second mask. SRTM is the Shuttle Radar Topography Mission; DEM is Digital Elevation Model. The SRTM-3AS_IMGCAT_NULL image data layer comprises 3204 derivative seamless image catalog features calculated from 0.000833° data originally from FAO.
FISH_ANADROMOUS_ARC: This feature class is used to determine whether a particular anadromous fish species is present in a stream, which can affect protection buffer widths in land management plans and activities. The data come primarily from the National Hydro Dataset (NHD) and reflect either actual observations or modeled fish presence. Although each district carried out its determinations in various ways, verification generally means that a fish biologist or other trained specialist was able to observe the fish in the stream. This data is a descendant of the Aquatic Resources Information Management System (ARIMS).
The layers within this feature service show the spatial extent and boundaries of BLM National Conservation Lands (NCL) Wilderness Areas in Utah. Per legislative authority, this data is under review for “clerical and typographical” errors. Boundary lines may not accurately align with features that currently exist on the ground, such as designated roads. Some features may be adjusted pursuant to S. 47: John D. Dingell, Jr. Conservation, Management, and Recreation Act, Public Law 116-9, March 12, 2019. Data within these services are a live copy of BLM Utah's enterprise production environment. Quality control is conducted annually. Complete metadata for these data sets can be found at:
BLM UT Designated Wilderness (Arc)
BLM UT Designated Wilderness (Polygon)
Pathdroid is a framework to analyze binary Android applications for program defects and malicious behaviors. Pathdroid is based on the Java Pathfinder (JPF) verification system (http://babelfish.arc.nasa.gov/trac/jpf), and thus provides model checking capabilities for applications that are distributed as standard Android .dex or .apk files.
The layers within this feature service represent the spatial extent and boundaries of BLM Grazing Allotments and Pastures in Utah. Data within these services are a live copy of BLM Utah's enterprise production environment. Quality control is conducted annually. Complete metadata for these data sets can be found at:
BLM UT Grazing Pastures (Arc)
BLM UT Grazing Pastures (Polygon)
BLM UT Grazing Allotments (Polygon)
Growler is a C++-based distributed object and event architecture. It supports serialization of C++ objects in its Remote Method Invocation, Event Channels, and Interface Definition Language. Its primary application has been in support of interactive, distributed visualization, computational steering, and concurrent visualization, but it is a general-purpose system for distributed programming.
Save is a framework for implementing highly available network-accessible services. Save consists of a command-line utility and a small set of extensions for the existing Mon monitoring utility. Mon is a flexible command scheduler that has the ability to take various actions (called 'alerts') depending on the exit conditions of the periodic commands (called 'monitors') it executes. Save provides a set of monitors and alerts that execute within the Mon scheduler.
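The monitor/alert model described above can be illustrated with a small Mon configuration sketch. This fragment is an assumption-laden example, not part of the Save distribution: the hostgroup, hostnames, interval, and mail address are placeholders, and fping.monitor and mail.alert stand in for whatever monitors and alerts (including Save's own) are installed.

```
# Illustrative Mon configuration sketch (placeholders throughout).
# Mon runs fping.monitor against the "servers" hostgroup every minute
# and fires mail.alert when the monitor exits with failure.
hostgroup servers node1 node2

watch servers
    service ping
        interval 1m
        monitor fping.monitor
        period wd {Sun-Sat}
            alert mail.alert admin@example.com
```

Save-provided monitors and alerts plug into the same watch/service structure in place of the stock scripts shown here.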
Libibvpp is a C++ wrapper around libibverbs, which is part of the OpenFabrics software suite (www.openfabrics.org).
Catalog of an Arc-Grid-based derivative of the SRTM 3-arc-second Version 2 DEM for Africa, a seamless tiled compilation with oceans set to null using the SRTM-SWBD 1-arc-second mask and terrestrial void areas backfilled with the SRTM-GTopo30 DEM. SRTM is the Shuttle Radar Topography Mission; DEM is Digital Elevation Model; GTopo30 is the Global Topographic 30-arc-second DEM database with nominal 1 km postings. The SRTM-3AS_IMGCAT_FILLED image data layer comprises 3204 derivative seamless image catalog features calculated from 0.000833° data originally from FAO.
Base waterbody polygon arcs are arcs attributed with feature type, source, and capture date that make up the boundaries of base waterbody polygons in the Base Features Hydrography geospatial dataset. These arcs were collected through conversion of 1:20 000, 1:50 000, and AVI Provincial mapping datasets and the 1:50 000 National Topographic Data Base (NTDB).
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)
https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Dataset Card for "ai2_arc"
Dataset Summary
A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to… See the full description on the dataset page: https://huggingface.co/datasets/allenai/ai2_arc.
The CFD Utility Software Library consists of nearly 30 libraries of Fortran 90 and Fortran 77 subroutines and almost 100 applications built on those libraries. Many of the utilities apply to multiblock structured grids and flow solutions, but numerous other reusable modules in categories such as interpolation, optimization, quadrature, rapid searching, and character manipulation reflect several decades of software development in the Aerodynamics Division and Space Technology Division at NASA Ames Research Center.