Digital flood-inundation map libraries for two reaches comprising 14.8 miles of the Little and Big Papillion Creeks in Omaha, Nebraska, were created by the U.S. Geological Survey (USGS) in cooperation with the Papio-Missouri River Natural Resources District. The flood-inundation maps, which can be accessed through the USGS Flood Inundation Mapping Program website at https://www.usgs.gov/mission-areas/water-resources/science/flood-inundation-mapping-fim-program, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the USGS streamgages Little Papillion Creek at Irvington, Nebr. (station 06610750), Little Papillion Creek at Ak-Sar-Ben at Omaha, Nebr. (station 06610765), and Big Papillion Creek at Q Street at Omaha, Nebr. (station 06610770). Near-real-time stages at these streamgages may be obtained from the USGS National Water Information System database at https://doi.org/10.5066/F7P55KJN or from the National Weather Service Advanced Hydrologic Prediction Service at https://water.weather.gov/ahps/. Flood profiles were computed using hydraulic models for the two stream reaches, which together comprise 14.8 miles of stream length on the Little and Big Papillion Creeks in Omaha. The models were calibrated by adjusting roughness coefficients to best represent the current (2022) stage-streamflow relation at the streamgages within the study reach. The hydraulic models were then used to compute water-surface profiles at 1-foot (ft) stage intervals over selected stage ranges to represent various flooding scenarios at the streamgages in the reach. The simulated water-surface profiles were then combined, using a geographic information system, with a digital elevation model on a 10-ft grid to delineate the flooded area and water depths at each stage. Along with the inundated-area maps, polygon shapefiles of areas behind the levees were created to convey the uncertainty of flooding in these areas if a levee breach were to occur. These 'areas of uncertainty' files have '_breach' appended to the file names in the data release. The availability of these maps, along with information regarding current stage from USGS streamgages, will provide emergency management personnel and residents with information that is critical for flood-response activities, such as evacuations and road closures, as well as for post-flood recovery efforts.
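The GIS step described above lends itself to a compact illustration. The sketch below is a hypothetical, minimal version of deriving an inundation depth grid by differencing a simulated water-surface elevation (WSE) surface against the DEM; the arrays, values, and function name are invented for illustration and are not USGS data or code.

```python
# Hypothetical sketch of the GIS step described above: difference a simulated
# water-surface elevation (WSE) surface against the DEM to delineate the
# flooded area and depth at one stage. Arrays and values are illustrative
# only; the real workflow operates on georeferenced rasters.
import numpy as np

def inundation_depth(wse: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """Water depth where the WSE exceeds the land surface; NaN marks dry cells."""
    depth = wse - dem                      # positive where water stands above ground
    return np.where(depth > 0.0, depth, np.nan)

# Toy 10-ft grid: a channel cut into a sloping floodplain (elevations in feet).
dem = np.array([[102.0, 101.0, 99.5, 101.0, 102.5],
                [102.0, 100.5, 99.0, 100.5, 102.0]])
wse = np.full_like(dem, 101.2)             # one simulated stage; flat WSE for simplicity
print(inundation_depth(wse, dem))
```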
The nature of the mapping process that imbues number symbols with their numerical meaning—known as the “symbol-grounding process”—remains poorly understood and is the topic of much debate. The aim of this study was to enhance insight into how the nonsymbolic–symbolic number mapping process and its neurocognitive correlates might differ between small (1–4; subitizing range) and larger (6–9) numerical ranges. To this end, 22 young adults performed a learning task in which novel symbols acquired numerical meaning by mapping them onto nonsymbolic magnitudes presented as dot arrays (range 1–9). Learning-dependent changes in accuracy and reaction time (RT) provided evidence for successful novel symbol-quantity mapping in the subitizing (1–4) range only. Corroborating these behavioral results, the number-processing-related P2p component was modulated only by the learning/mapping of symbols representing the small numbers 1–4. The symbolic N1 amplitude increased with learning, independent of the symbolic numerical range but dependent on the set size of the preceding dot array: the effect occurred only when symbols were mapped onto one- to four-item dot arrays, which allow quick retrieval of a numeric value; with learning, this value lets one predict the upcoming symbol, so that observing a different symbol causes a perceptual expectancy violation. These combined results suggest that exact nonsymbolic–symbolic mapping is only successful for small quantities 1–4, from which one can readily extract cardinality. Furthermore, we suggest that the P2p reflects the processing stage of first access to or retrieval of numeric codes and might in future studies be used as a neural correlate of nonsymbolic–symbolic mapping/symbol learning.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Tone mapping operators (TMOs) are functions that map high dynamic range (HDR) images to a standard dynamic range (SDR) while aiming to preserve the perceptual cues of a scene that govern its visual quality. Despite the increasing number of studies on quality assessment of tone-mapped images, current subjective quality datasets contain relatively small numbers of images and subjective opinions. Moreover, existing challenges in transferring laboratory experiments to crowdsourcing platforms pose a barrier to collecting large-scale datasets through crowdsourcing.
We address these challenges and propose RealVision-TMO (RV-TMO), a large-scale tone-mapped image quality dataset. RV-TMO contains 250 unique HDR images, their tone-mapped versions obtained using four TMOs, and pairwise comparison results from seventy unique observers for each pair.
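As a concrete, hedged illustration of what a TMO does, the sketch below implements the simple global operator of Reinhard et al. (2002). The dataset description does not name the four TMOs used in RV-TMO, so this operator and its parameters are assumptions chosen purely for illustration.

```python
# Minimal sketch of a global tone mapping operator (the simple global form of
# Reinhard et al. 2002). Not one of the RV-TMO operators necessarily; the
# dataset description does not name them, so this is illustrative only.
import numpy as np

def reinhard_tmo(hdr_lum: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Compress HDR luminance into the [0, 1) SDR range."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(hdr_lum + eps)))  # log-average scene luminance
    scaled = (key / log_avg) * hdr_lum                # map scene to the chosen key value
    return scaled / (1.0 + scaled)                    # smooth highlight compression

hdr_lum = np.random.lognormal(mean=0.0, sigma=2.0, size=(4, 4))  # toy HDR luminance
print(reinhard_tmo(hdr_lum).round(3))                 # values now lie in [0, 1)
```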
This dataset is published as part of the journal paper titled "RV-TMO: Large-Scale Dataset for Subjective Quality Assessment of Tone Mapped Images". If you use this dataset in your work, please cite the paper below:
@ARTICLE{9872141,
  author={Ak, Ali and Goswami, Abhishek and Hauser, Wolf and Le Callet, Patrick and Dufaux, Frederic},
  journal={IEEE Transactions on Multimedia},
  title={RV-TMO: Large-Scale Dataset for Subjective Quality Assessment of Tone Mapped Images},
  year={2022},
  pages={1-12},
  doi={10.1109/TMM.2022.3203211}}
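Pairwise comparison data such as RV-TMO's are commonly reduced to scalar quality scores; the sketch below shows one standard approach, Thurstone Case V scaling. This is not necessarily the analysis used in the paper, and the vote matrix below is invented (70 observers per pair, matching the dataset's design).

```python
# Hedged sketch: turning pairwise-comparison counts into scalar quality scores
# via Thurstone Case V scaling. The paper's own analysis may differ; the vote
# matrix is invented, with 70 observers per pair as in the dataset.
import numpy as np
from scipy.stats import norm

def thurstone_case_v(counts: np.ndarray) -> np.ndarray:
    """counts[i, j] = number of times condition i was preferred over j."""
    totals = counts + counts.T                        # comparisons per pair
    p = np.where(totals > 0, counts / np.maximum(totals, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                        # avoid infinite z-scores
    z = norm.ppf(p)                                   # preference prob -> normal deviate
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)                             # mean deviate = quality score

votes = np.array([[ 0, 52, 60, 48],
                  [18,  0, 41, 30],
                  [10, 29,  0, 25],
                  [22, 40, 45,  0]])
print(thurstone_case_v(votes))                        # higher = better perceived quality
```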
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Abstract: Direct numerical simulations (DNSs) are used to systematically investigate the applicability of the minimal-channel approach for characterizing roughness-induced drag over irregular rough surfaces. Roughness is generated mathematically using a random algorithm in which the power spectrum (PS) and probability density function (PDF) of the surface height function can be prescribed. Twelve different combinations of PS and PDF are examined, and both transitionally and fully rough regimes are investigated (roughness heights vary in the range $k^+ = 25$--$100$). It is demonstrated that both the roughness function ($\Delta U^+$) and the zero-plane displacement can be predicted within $\pm5\%$ accuracy using DNS in properly sized minimal channels. Notably, the predictions do not deteriorate when a limited range of large horizontal roughness scales is filtered out due to the small channel size (here up to 10\% of the original roughness height spectral energy based on the 2D PS). Additionally, examining the results obtained from different random realizations of roughness shows that a given combination of PDF and PS leads to a nearly unique $\Delta U^+$ for deterministically different surface topographies. In addition to the global flow properties, the distribution of the time-averaged surface force exerted by the roughness onto the fluid is calculated and compared across cases. It is shown that patterns of surface force distribution over irregular rough surfaces can be well captured when the sheltering effect is taken into account; this is made possible by applying the sheltering model proposed by Yang et al. to each specific roughness topography. Furthermore, an analysis of the coherence function between the roughness height and surface force distributions reveals that the coherence drops at larger streamwise wavelengths, which may indicate that very large horizontal scales contribute less to the skin-friction drag. Finally, some existing roughness correlations are assessed using the present roughness dataset, and it is shown that the correlation predictions for the equivalent sand-grain roughness mainly lie within $\pm30\%$ of the DNS results.

Technical Remarks: These files contain the data used in the publication "DNS-based characterization of pseudo-random roughness in minimal channels" by J. Yang, A. Stroh, D. Chung, and P. Forooghi, published in Journal of Fluid Mechanics, doi:10.1017/jfm.2022.331.

Numerical Details: The DNS is based on a pseudo-spectral solver for incompressible boundary layer flows developed at KTH Stockholm. The Navier-Stokes equations are numerically integrated in the velocity-vorticity formulation by a spectral method, with Fourier decomposition in the horizontal directions and Chebyshev discretization in the wall-normal direction. For temporal advancement, the convection and viscous terms are discretized using the 3rd-order Runge-Kutta and Crank-Nicolson methods, respectively. The simulation domain represents a turbulent channel flow with periodic boundary conditions applied in the streamwise and spanwise directions, while the wall-normal extent of the domain is bounded by no-slip conditions at the upper and lower walls. The flow is driven by a prescribed constant pressure gradient (CPG). The friction Reynolds number for the present case is fixed at Re_τ = 500. The structured surface is introduced through an immersed boundary method (IBM) based on the method proposed by Goldstein et al. (1993), which is essentially a proportional controller that imposes zero velocity in the solid region of the numerical domain.

Data Files: The data files are saved and labeled as *.mat files. Each file contains MATLAB data consisting of the roughness height distribution and corresponding coordinates. The roughness structures are non-dimensionalized with the channel half height δ.

Reference: Please provide a reference to the article above when using this data. Please direct questions regarding the numerical setup/data to Jiasheng Yang (jiasheng.yang@kit.edu)
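To make the roughness-generation idea concrete, here is a minimal Python sketch of one common way to realize a surface with a prescribed PS and PDF: Fourier-filter Gaussian white noise to the target spectrum, then rank-map the heights onto the target distribution. The power-law spectral slope and the Gaussian target PDF below are assumptions for illustration; the paper's exact algorithm may differ.

```python
# Illustrative sketch: generate random roughness with a prescribed power
# spectrum (PS) and probability density function (PDF). The spectral shape
# and target PDF are assumptions, not the paper's exact recipe.
import numpy as np

rng = np.random.default_rng(0)
nx = nz = 256

# Step 1: impose the PS by filtering white noise in Fourier space.
noise = rng.standard_normal((nx, nz))
kx = np.fft.fftfreq(nx)[:, None]
kz = np.fft.fftfreq(nz)[None, :]
k = np.sqrt(kx**2 + kz**2)
amp = np.zeros_like(k)
amp[k > 0] = k[k > 0] ** -1.0                   # assumed power-law spectral shape
h = np.fft.ifft2(np.fft.fft2(noise) * amp).real

# Step 2: rank-map heights onto the target PDF (here: standard normal); the
# spatial ordering, and hence approximately the PS, is preserved.
target = np.sort(rng.standard_normal(h.size))
ranks = np.argsort(np.argsort(h.ravel()))
h_mapped = target[ranks].reshape(h.shape)

print(h_mapped.mean(), h_mapped.std())          # ~0 and ~1 by construction
```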
We present the most extensive catalog of exposures of volatiles on the 67P/Churyumov-Gerasimenko nucleus generated from observations acquired with the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) on board the Rosetta mission. We investigate the distribution of volatile exposures across the nucleus, their size distribution, and the evolution of their spectral slope. We analyzed medium- and high-resolution images acquired with the Narrow Angle Camera (NAC) of OSIRIS at several wavelengths in the 250-1000 nm range, investigating images from 109 different color sequences taken between August 2014 and September 2016 and covering spatial resolutions from a few m/px to 0.1 m/px. To identify the icy bright spots, we adopted the following criteria: i) they should be at least 50% brighter than the comet dark terrain; ii) they should have neutral to moderate spectral slope values in the visible range (535-882 nm); and iii) they should be larger than 3 pixels. We identified more than 600 volatile exposures on the comet and analyzed them in a homogeneous way. Bright spots are found isolated on the nucleus or grouped in clusters, usually at the bottom of cliffs, and most of them are small, typically a few square meters or smaller. The isolated ones are observed in different types of morphological terrains, including smooth surfaces, on top of boulders, or close to irregular structures. Several of them are clearly correlated with cometary activity, being the sources of jets or appearing after an activity event. We note a number of peculiar exposures of volatiles with negative spectral slope values in the high-resolution post-perihelion images, which we interpret as the presence of large ice grains (>1000 µm) or local frost condensation. We observe a clear difference in both the spectral slope and the area distributions of the bright spots pre- and post-perihelion, with the latter having lower average spectral slope values and smaller sizes, with a median surface area of 0.7 m^2, even if the size difference is mainly due to the higher resolution achieved post-perihelion. The minimum duration of the bright spots shows three clusters: an area-independent cluster dominated by short-lifetime frosts; an area-independent cluster with lifetimes of 0.5-2 days, probably associated with the seasonal fallout of dehydrated chunks; and an area-dependent cluster with lifetimes longer than 2 days, consistent with water-driven erosion of the nucleus. Even if numerous bright spots are detected, the total surface of exposed water ice is less than 50,000 m^2, which is 0.1% of the total 67P nucleus surface. This confirms that the surface of comet 67P is dominated by refractory dark terrains, while exposed ice occupies only a tiny fraction. High spatial resolution is mandatory to identify ice on cometary nuclei surfaces. Moreover, the abundance of volatile exposures is six times lower in the small lobe than in the big lobe, adding further evidence to the hypothesis that comet 67P is composed of two distinct bodies. The fact that the majority of the bright spots identified have a surface area smaller than 1 m^2 supports a model in which water-ice-enriched blocks (WEBs) of 0.5-1 m size are homogeneously distributed in the cometary nucleus, embedded in a refractory matrix.
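The three detection criteria translate almost directly into code. The sketch below is a hypothetical implementation: the brightness factor (1.5x the dark terrain) and minimum size (>3 px) follow the text, the spectral slope is computed with the usual normalization at the shorter wavelength, and the cutoff for a "moderate" slope is an assumed parameter.

```python
# Hypothetical implementation of the three bright-spot criteria above; the
# slope normalization is the conventional one, and max_slope is an assumed
# parameter, not a value taken from the paper.
import numpy as np
from scipy import ndimage

def find_bright_spots(r535, r882, dark_terrain, max_slope=10.0):
    """Return a label image of candidate ice exposures."""
    bright = r535 >= 1.5 * dark_terrain                       # criterion i
    slope = (r882 - r535) / (r535 * (882.0 - 535.0)) * 1e4    # %/100 nm, criterion ii
    neutral = slope <= max_slope
    labels, n = ndimage.label(bright & neutral)               # criterion iii below
    sizes = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    keep_ids = 1 + np.flatnonzero(sizes > 3)
    return np.where(np.isin(labels, keep_ids), labels, 0)

r535 = np.random.uniform(0.02, 0.08, (64, 64))   # toy reflectance maps, not OSIRIS data
r882 = r535 * np.random.uniform(0.9, 1.3, (64, 64))
spots = find_bright_spots(r535, r882, dark_terrain=0.04)
print(np.unique(spots).size - 1, "candidate spots")
```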
Passive galaxies at high redshift are much smaller than equally massive early-type galaxies today. If this size evolution is caused by stochastic merging processes, then a small fraction of the compact galaxies should persist until today. Up to now, it has not been possible to systematically identify such objects in the Sloan Digital Sky Survey (SDSS). We aim at finding potential survivors of these compact galaxies in SDSS as targets for more detailed follow-up observations. From the virial theorem, it is expected that, for a given mass, compact galaxies have a stellar velocity dispersion higher than the mean owing to their smaller sizes. Velocity dispersion, coupled with size (or mass), is therefore an appropriate basis for selecting relics, independent of the stellar population properties. Based on these considerations, we designed a set of criteria that use the distribution of early-type galaxies from SDSS in the log10(R0)-log10(σ0) plane to find the most extreme objects in it. We thus selected compact massive galaxy candidates by restricting to high velocity dispersions σ0 > 323.2 km/s and small sizes R0 < 2.18 kpc. We find 76 galaxies at 0.05 < z < 0.2 whose properties are similar to those of typical quiescent galaxies at high redshift. We discuss how these galaxies relate to average present-day early-type galaxies. We study how well these galaxies fit known local-universe relations of early-type galaxies, such as the fundamental plane, the red sequence, and mass-size relations. As expected from the selection criteria, the candidates are located in an extreme corner of the mass-size plane. However, they do not extend as deeply into the so-called zone of exclusion as some of the red nuggets found at high redshift, since they are a factor of 2-3 less massive at a given intrinsic scale size. Several of our candidates are close to the size resolution limit of SDSS but are not so small that they are classified as point sources. We find that our candidates are systematically offset on the scaling relations compared to average early-type galaxies, but still within the general range of other early-type galaxies. Furthermore, our candidates lie in the mass-size range expected for passive evolution of the red nuggets from their high redshift to the present. The 76 selected candidates form an appropriate set of objects for further follow-up observations. They do not constitute a separate population of peculiar galaxies, but form the extreme tail of a continuous distribution of early-type galaxies. We argue that selecting on high velocity dispersion is the best way to find analogues of compact high-redshift galaxies in the local universe.
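The selection itself is a simple pair of cuts plus the redshift window, as the sketch below illustrates on an invented catalog table; the column names are hypothetical, while the thresholds come from the text.

```python
# Minimal sketch of the selection cuts described above, applied to an invented
# catalog table. Column names are hypothetical; the thresholds
# (sigma0 > 323.2 km/s, R0 < 2.18 kpc, 0.05 < z < 0.2) come from the text.
import pandas as pd

catalog = pd.DataFrame({
    "sigma0_kms": [350.1, 310.0, 330.5, 340.2],   # stellar velocity dispersion
    "R0_kpc":     [1.90,  1.50,  2.50,  2.00],    # intrinsic size
    "z":          [0.10,  0.08,  0.15,  0.30],    # redshift
})

candidates = catalog[
    (catalog["sigma0_kms"] > 323.2)
    & (catalog["R0_kpc"] < 2.18)
    & (catalog["z"] > 0.05)
    & (catalog["z"] < 0.2)
]
print(candidates)   # rows meeting the compact massive galaxy criteria
```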
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The USDA Agricultural Research Service (ARS) recently established SCINet, which consists of a shared high-performance computing resource, Ceres, and the dedicated high-speed Internet2 network used to access Ceres. Current and potential SCINet users are using and generating very large datasets, so SCINet needs to be provisioned with adequate data storage for their active computing. It is not designed to hold data beyond active research phases. At the same time, the National Agricultural Library has been developing the Ag Data Commons, a research data catalog and repository designed for public data release and professional data curation. The Ag Data Commons needs to anticipate the size and nature of the data it will be tasked with handling.
The ARS Web-enabled Databases Working Group, organized under the SCINet initiative, conducted a study to establish baseline data storage needs and practices and to make projections that could inform future infrastructure design, purchases, and policies. The working group helped develop the survey that is the basis for an internal report. While the report was for internal use, the survey and resulting data may be generally useful and are being released publicly.
From October 24 to November 8, 2016, we administered a 17-question survey (Appendix A) by emailing a Survey Monkey link to all ARS Research Leaders, intending to cover the data storage needs of all 1,675 SY (Category 1 and Category 4) scientists. We designed the survey to accommodate either individual researcher responses or group responses. Research Leaders could decide, based on their unit's practices or their management preferences, whether to delegate the response to a data management expert in their unit, to distribute it to all members of their unit, or to collate responses from their unit themselves before reporting in the survey.
Larger storage ranges cover vastly different amounts of data, so the implications here could be significant depending on whether the true amount is at the lower or higher end of the range. We therefore requested more detail from 'Big Data users', the 47 respondents who indicated a total current data volume in the 'more than 10 to 100 TB' or 'over 100 TB' ranges (Q5). All other respondents are called 'Small Data users'. Because not all of these follow-up requests were successful, we used the actual follow-up responses to estimate likely responses for those who did not respond.
We defined active data as data that would be used within the next six months. All other data would be considered inactive, or archival.
To calculate per-person storage needs, we used the high end of the reported range divided by 1 for an individual response, or by G, the number of individuals covered by a group response. For Big Data users we used the actual reported values or the estimated likely values.
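A minimal sketch of that calculation, with illustrative numbers rather than actual survey responses:

```python
# Minimal sketch of the per-person calculation described above; the example
# numbers are illustrative, not survey responses.
def per_person_tb(range_high_tb: float, group_size: int = 1) -> float:
    """High end of the reported storage range, split across the people covered."""
    return range_high_tb / max(group_size, 1)

print(per_person_tb(100.0))                # individual reporting the 10-100 TB range
print(per_person_tb(100.0, group_size=8))  # group response covering 8 researchers
```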
Resources in this dataset:

Resource Title: Appendix A: ARS data storage survey questions.
File Name: Appendix A.pdf
Resource Description: The full list of questions asked, with the possible responses. The survey was not administered using this PDF; the PDF was generated directly from the administered survey using the Print option under Design Survey. Asterisked questions were required. A list of Research Units and their associated codes was provided in a drop-down not shown here.
Resource Software Recommended: Adobe Acrobat, url: https://get.adobe.com/reader/

Resource Title: CSV of Responses from ARS Researcher Data Storage Survey.
File Name: Machine-readable survey response data.csv
Resource Description: CSV file that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed. This is the same data as in the Excel spreadsheet (also provided).

Resource Title: Responses from ARS Researcher Data Storage Survey.
File Name: Data Storage Survey Data for public release.xlsx
Resource Description: MS Excel worksheet that includes raw responses from the administered survey, as downloaded unfiltered from Survey Monkey, including incomplete responses. Also includes additional classification and calculations to support analysis. Individual email addresses and IP addresses have been removed.
Resource Software Recommended: Microsoft Excel, url: https://products.office.com/en-us/excel
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Honeybees [1] and bumblebees [2] perform learning flights on leaving a newly discovered flower. During these flights, bees spend a portion of the time turning back to face the flower, when they can memorise views of the flower and its surroundings. In honeybees, learning flights become longer when the reward offered by a flower is increased [3]. We show here that bumblebees behave in a similar way, and we add that bumblebees face an artificial flower more when the concentration of the sucrose solution that the flower provides is higher. The surprising finding is that a bee's size determines what a bumblebee regards as a 'low' or a 'high' concentration and so affects its learning behaviour. The larger bees in a sample of foragers only enhance their flower facing when the sucrose concentration is in the upper range of the flowers that are naturally available to bees [4]. In contrast, smaller bees invest the same effort in facing flowers whether the concentration is high or low, but their effort is less than that of larger bees. The way in which different-sized bees distribute their effort when learning about flowers parallels the foraging behaviour of a colony. Large bumblebees [5] are able to carry larger loads and explore further from the nest than smaller ones [6, 7]. Small ones, with a smaller flight range and carrying capacity, cannot afford to be as selective and so accept a wider range of flowers.
The data are results from radiative transfer simulations from 390 to 1020 nm at 1 nm resolution. They can be convolved with any ocean colour instrument's spectral response function and therefore represent satellite-, aircraft-, or ground-based measurements of the remote sensing reflectance. The data were simulated with the radiative transfer code MOMO (Matrix Operator Model), which simulates the full radiative transfer in atmosphere and ocean. The code is hosted at the Institute of Space Sciences at Freie Universität Berlin and is not publicly available. In addition to molecular Rayleigh scattering, one maritime aerosol scatterer is considered. The data are available for 9 solar zenith, 9 viewing zenith, and 25 azimuth angles. The remote sensing reflectance is simulated as a function of inherent optical properties (IOPs) representing pure water with different salinities and 5 water constituents (chlorophyll-a pigment, detritus, yellow substance, and a 'big' and a 'small' scatterer) over a global range of concentrations. The IOPs are varied independently. The grid points for each IOP were chosen to reproduce the full relation between that particular IOP and the resulting remote sensing reflectance.
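As a hedged illustration of the convolution step, the sketch below band-averages a toy 1 nm resolution Rrs spectrum with a Gaussian spectral response function; the Gaussian shape, band center, and width are assumptions, not properties of the dataset or of any particular sensor.

```python
# Hedged sketch: band-average a toy 1 nm resolution remote sensing reflectance
# (Rrs) spectrum with a Gaussian spectral response function (SRF). The SRF
# shape, band center, and width are assumptions for illustration.
import numpy as np

wavelengths = np.arange(390, 1021)                    # nm, matching the dataset grid
rrs = 0.002 + 0.001 * np.exp(-((wavelengths - 560) / 40.0) ** 2)  # invented spectrum

def band_rrs(rrs, wavelengths, center, fwhm):
    """SRF-weighted band average: integral(Rrs * SRF) / integral(SRF)."""
    sigma = fwhm / 2.355                              # convert FWHM to Gaussian sigma
    srf = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return np.trapz(rrs * srf, wavelengths) / np.trapz(srf, wavelengths)

print(band_rrs(rrs, wavelengths, center=560.0, fwhm=10.0))  # one simulated band value
```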