Limitations on public access: http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/INSPIRE_Directive_Article13_1a
This collection provides access to the ALOS-1 PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) OB1 L1C data acquired by ESA stations (Kiruna, Maspalomas, Matera, Tromsoe) in the ADEN zone, in addition to worldwide data requested by European scientists. The ADEN zone was the area belonging to the European Data node; it covered the European and African continents, a large part of Greenland and the Middle East. The full mission archive is included in this collection, though with gaps in spatial coverage outside the ADEN zone. With respect to the L1B collection, only scenes acquired in sensor mode with a Cloud Coverage score lower than 70% and a sea percentage lower than 80% are published, within the following ranges: Orbits: 2768 to 27604; Path (corresponds to JAXA track number): 1 to 665; Row (corresponds to JAXA scene centre frame number): 310 to 6790. The L1C processing strongly improves geolocation accuracy compared to L1B1: from several tens of metres in L1B1 (~40 m northing error for Forward views and ~10–20 m easting error) to a few metres in L1C scenes (< 10 m in both northing and easting). The collection contains only the PSM_OB1_1C EO-SIP product type, using data from PRISM operating in OB1 mode with three views (Nadir, Forward and Backward) at 35 km swath width. Most products contain all three views; the Nadir view is always available and is used for frame number identification. All views are packaged together; each view, in CEOS format, is stored in a directory named according to the JAXA view ID naming convention.
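For illustration, the scene-selection criteria above can be written as a single predicate. This is a hedged sketch only: the field names (cloud_cover, sea_pct, orbit, path, row) are hypothetical and not part of any ESA catalogue API.

```python
def is_published(scene: dict) -> bool:
    """Return True if a sensor-mode scene meets the stated L1C publication criteria.

    Field names are hypothetical placeholders, not an ESA API.
    """
    return (
        scene["cloud_cover"] < 70            # Cloud Coverage score below 70%
        and scene["sea_pct"] < 80            # sea percentage below 80%
        and 2768 <= scene["orbit"] <= 27604  # orbit range
        and 1 <= scene["path"] <= 665        # JAXA track number
        and 310 <= scene["row"] <= 6790      # JAXA scene centre frame number
    )

print(is_published({"cloud_cover": 10, "sea_pct": 5,
                    "orbit": 3000, "path": 100, "row": 400}))  # True
```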
This collection is composed of a subset of ALOS-1 PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) OB1 L1C products from the ALOS PRISM L1C collection (DOI: 10.57780/AL1-ff3877f), chosen to provide cloud-free coverage over Europe. 70% of the scenes in the collection have a cloud cover percentage of 0%, while the remaining 30% have a cloud cover percentage of no more than 20%. The collection is composed of PSM_OB1_1C EO-SIP products, with the PRISM sensor operating in OB1 mode with three views (Nadir, Forward and Backward) at 35 km swath width.
License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Brood parasites use the parental care of others to raise their young and sometimes employ mimicry to dupe their hosts. The brood-parasitic finches of the genus Vidua are a textbook example of the role of imprinting in sympatric speciation. Sympatric speciation is thought to occur in Vidua because their mating traits and host preferences are strongly influenced by their early host environment. However, this alone may not be sufficient to isolate parasite lineages, and divergent ecological adaptations may also be required to prevent hybridisation collapsing incipient species. Using pattern recognition software and classification models, we provide quantitative evidence that Vidua exhibit specialist mimicry of their grassfinch hosts, matching the patterns, colours and sounds of their respective host’s nestlings. We also provide qualitative evidence of mimicry in postural components of Vidua begging. Quantitative comparisons reveal small discrepancies between parasite and host phenotypes, with parasites sometimes exaggerating their host’s traits. Our results support the hypothesis that behavioural imprinting on hosts has not only enabled the origin of new Vidua species, but also set the stage for the evolution of host-specific, ecological adaptations.
Materials and methods
Fieldwork
During January–April 2013, 2014, 2015, 2016 and 2017, data were collected on nestling morphology, begging calls and postural movements over an area of about 40 km² on and around Musumanene and Semahwa Farms (centred on 16°47′S, 26°54′E) in the Choma District of southern Zambia. The habitat is a mixture of miombo woodland, grassland and agricultural fields.
Visual mimicry
Photographing Vidua and grassfinch nestling mouths
Eggs were taken from nests in the wild and placed in a Brinsea Octagon 20 Advance EX incubator at 36.7°C and 60% humidity. Nestling mouths were photographed within a few hours of hatching in the incubator. The chick was held below a prism until the mouth naturally opened, and the mouth was then pressed gently over the apex of the prism (PEF2525 equilateral prism, UV fused silica, 25 × 25 mm aperture, Knight Optical, Kent, UK). This allowed the angled interior surfaces of the chick’s mouth to be projected onto the prism face opposite this edge. A wooden block secured the prism and held a 40% Spectralon grey standard (Labsphere, Congleton, UK) in a consistent position. Photos were taken with a Micro-Nikkor 105 mm lens and a Nikon D7000 camera that had undergone a quartz conversion (Advanced Camera Services, Norfolk, UK), replacing the UV and infrared (IR) blocking filter with a quartz sheet to allow sensitivity to both human-visible and UV wavelengths. The camera was placed on a tripod and pointed vertically down onto the flat surface of the prism at approximately 50 cm distance. The chick was gently held between thumb and forefinger as it bit on the prism. For each individual nestling, two photos were taken, each with a different filter: UV photographs with a Baader UV pass filter (transmitting 320–380 nm), and human-visible photographs with a Baader UV-IR blocking filter (transmitting 420–680 nm). For each photograph the aperture was set to f/13, and the shutter speed varied with exposure. A flash (Metz 76 MZ-5 digital) was attached to the camera body via a lateral bracket and had been modified by removal of its UV blocking filter, such that it emitted both visible and UV light. The flash was set to under-expose by 3 stops for the “visible” images, and to over-expose by 3 stops for the “UV” images. ISO was set at 400 and images were taken in RAW (NEF) format. All images were taken indoors in a dark room to minimise ambient light.
The setup is shown in Figure S1. Once the photographs had been taken, the chicks were returned to their nests.
Pattern mimicry
Measurements of overall similarity between mouth marking patterns of different species were carried out using NaturePatternMatch (NPM) (Stoddard et al. 2014). NPM is a computer vision program that uses the Scale Invariant Feature Transform (SIFT) algorithm to detect local features in images and gives each pairwise combination of images a similarity score (Lowe 1999, 2004). These features are thought to correspond to those used by birds in real object recognition tasks (Soto and Wasserman 2012) and have been shown to be important in pattern recognition and egg rejection decisions in another host species, the tawny-flanked prinia (Prinia subflava) (Stoddard et al. 2019). Each image was scaled to the same size, using the width of the prism as a reference, such that the edge of the prism was 1500 pixels long. This value was chosen because it approximates the smallest image in the dataset, and thus minimizes any information loss or artefacts caused by scaling up. Only the green channel was taken from each image, as this corresponds most closely with the spectral sensitivity of the double cones in bird vision, thought to be influential in the processing of pattern information (Cronin et al. 2014). The background and the edge of the prism were masked out and the images cropped to size. NPM calculates pairwise pattern differences between images. As a measure of host-parasite similarity, we calculated the mean distance between each Vidua species and each grassfinch species (raw distance). We additionally submitted these pairwise distances to classical multidimensional scaling, which embeds points in an n-dimensional space in which the Euclidean distances between the points are maintained. This allowed a centroid to be calculated for each species (the average of all positions of all samples from that species). We measured the distance between each Vidua species and each grassfinch species in this space (centroid distance). 
The qualitative results and conclusions were the same for both methods (Table S1). Sample sizes are summarised in Table S6.
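The embedding and centroid-distance steps above can be sketched as follows. This is a Python stand-in for the published workflow, under the assumption that a symmetric matrix D of pairwise NPM pattern distances and a species label per image are available; the toy points and labels below are invented for illustration.

```python
import numpy as np

def classical_mds(D, ndim=2):
    """Embed points so Euclidean distances approximate the entries of D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:ndim]      # keep the largest eigenvalues
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

def centroid_distance(coords, labels, sp_a, sp_b):
    """Distance between the mean embedded positions (centroids) of two species."""
    ca = coords[labels == sp_a].mean(axis=0)
    cb = coords[labels == sp_b].mean(axis=0)
    return float(np.linalg.norm(ca - cb))

# Toy example: four images, two per species, with known pairwise distances.
pts = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = classical_mds(D, ndim=2)
labels = np.array(["vidua", "vidua", "host", "host"])
dist = centroid_distance(coords, labels, "vidua", "host")
```

Because classical MDS preserves the Euclidean distances exactly when D is itself Euclidean, the recovered centroid distance here matches the original configuration (up to rotation and reflection).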
Comparison of upper palate spot size between parasites and hosts was carried out using the R package patternize (Van Belleghem et al. 2017), which quantifies variation in colour patterns from digital images. Analysis was carried out using R v3.4.4 (R Core Team 2018). Homologous regions of the mouth in each photograph were identified by placing five landmarks on reference points around the mouth, and the images were aligned to an arbitrarily chosen reference image. This allowed patterns to be compared among images even if there were slight differences in the distances between camera and chick and in the positioning of the chick within the image. To extract the black upper palate markings, thresholds were manually adjusted for red, green and blue colour channels for each image and their success at extracting black patterns assessed. Some manual adjustment of thresholds was needed between images to account for differences in lighting conditions and ensure that patterns were accurately extracted. Shaded regions that had been erroneously identified as pattern were manually removed from the selection. To compare spot size between hosts and parasites, the number of pixels in the standardised images that each of the upper palate spots contained was calculated for every individual. The spot size was then calculated relative to the overall size of the mouth. Comparisons were performed with Wilcoxon tests in R (R Core Team 2018). The sample sizes for the comparison of spot sizes were the same as for the analysis of pattern mimicry (see Table S6).
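A minimal sketch of the relative-spot-size comparison, using the two-sample Wilcoxon rank-sum test (the Python equivalent of R's wilcox.test for independent samples). The pixel counts are invented for illustration; in the published analysis they would come from the aligned, thresholded patternize output.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def relative_spot_size(spot_px, mouth_px):
    """Spot area as a fraction of total mouth area in the standardised image."""
    return np.asarray(spot_px, float) / np.asarray(mouth_px, float)

# Invented pixel counts for illustration (spot pixels, total mouth pixels):
host = relative_spot_size([120, 130, 110], [10000, 9800, 10200])
parasite = relative_spot_size([150, 160, 145], [10100, 9900, 10050])

# Two-sample Wilcoxon rank-sum test on the relative spot sizes.
stat, p = mannwhitneyu(host, parasite, alternative="two-sided")
```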
Colour mimicry
Raw pixel values from the red, green and blue channels for both the visual and the UV images were extracted from regions of interest (ROIs) in nestling mouth images using the Multispectral Image plugin in Image J (Schneider et al. 2012; Troscianko and Stevens 2015). Chosen ROIs were: 1) gape flanges, 2) outer upper palate (distal to medial palate spot), 3) inner upper palate (proximal to medial palate spot), 4) medial palate spot. ROIs 1, 2 and 3 were selected separately on right and left-hand sides of the chick’s mouth and a mean score of the two values was used. The medial palate spot lies along the bilateral line of symmetry for the chick’s mouth and so only a single ROI was required. Raw pixel values were converted into avian cone capture values based on the cut-throat finch (Amadina fasciata) visual system (Hart et al. 2000a) using Microsoft Excel version 15.30. The cut-throat finch is the most closely-related grassfinch species to the hosts of Vidua finches for which visual sensitivities have been calculated (Olsson and Alstrom 2020).
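The camera-to-cone conversion can be sketched as a linear mapping from the mean ROI pixel values of the four camera channels to the four avian single-cone catches. This is a minimal illustration only: the real mapping is fitted from the camera's and the cut-throat finch's spectral sensitivities (e.g. with the Multispectral Image toolbox), and the coefficient matrix below is a placeholder, not the published calibration.

```python
import numpy as np

# Placeholder 4x4 mapping; rows are the u, s, m, l cones, columns the
# linearised UV, blue, green and red camera channels. These coefficients
# are invented for illustration, not fitted values.
CAMERA_TO_CONE = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.1, 0.8, 0.1, 0.0],
    [0.0, 0.1, 0.8, 0.1],
    [0.0, 0.0, 0.1, 0.9],
])

def cone_catches(uv, b, g, r):
    """Estimate u/s/m/l cone-catch values from mean ROI pixel values."""
    return CAMERA_TO_CONE @ np.array([uv, b, g, r], float)
```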
Cone-capture values for each image were analysed with a discriminant function analysis (DFA) using the MASS package in R (Venables and Ripley 2002). A multinomial logistic regression (MLR) was also carried out on the same dataset. While both DFA and MLR can be used to address questions about categorisation, MLR has fewer restrictive assumptions than DFA; however, DFA is thought to be a better approach when sample sizes are small (Pohar et al. 2004). For both DFA and MLR, the models were initially trained on cone-capture values of the images from the 10 co-occurring grassfinch species we photographed at our study site. The results from MLR and DFA were similar (Table S2), so only the DFA results are reported in the main text. Sample sizes are summarised in Table S6. MLR was implemented using the multinom function from the R package nnet (Venables and Ripley 2002). DFA was implemented using the lda function from the R package MASS (Venables and Ripley 2002). Observed versus expected percentages were compared using the binom.test function in the R base stats package (R Development Core Team 2017).
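The classification logic can be sketched as follows, with a numpy nearest-class-centroid discriminant (Mahalanobis distance under a pooled covariance, equal priors) standing in for R's MASS::lda, and scipy's binomtest standing in for base-R binom.test. The data are simulated toys, not the real cone-catch values; host species and sample sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import binomtest

def lda_fit(X, y):
    """Class means and pooled within-class covariance (inverse), equal priors."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    pooled = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1)
                 for c in classes) / (len(X) - len(classes))
    return classes, means, np.linalg.inv(pooled)

def lda_predict(model, X):
    """Assign each sample to the class with smallest Mahalanobis distance."""
    classes, means, prec = model
    d2 = [np.einsum("ij,jk,ik->i", X - m, prec, X - m) for m in means]
    return classes[np.argmin(d2, axis=0)]

# Simulated training data: 3 host species, 20 samples each, 4 cone channels.
rng = np.random.default_rng(1)
hosts = np.vstack([rng.normal(loc=3 * i, size=(20, 4)) for i in range(3)])
labels = np.repeat(np.arange(3), 20)
model = lda_fit(hosts, labels)

# Hypothetical parasite samples resembling host species 1:
parasites = rng.normal(loc=3, size=(12, 4))
assigned = lda_predict(model, parasites)

# Are parasites assigned to their own host more often than the 1/3 chance rate?
k = int((assigned == 1).sum())
result = binomtest(k, n=12, p=1 / 3, alternative="greater")
```

With well-separated simulated species the parasites are assigned almost exclusively to their own host, and result.pvalue is correspondingly small; with real cone-catch data the assignment rates are what the binomial test evaluates against chance.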
The DFA/MLR models were initially trained on cone-catch values of the estrildid data. The training data consisted of 3 locust finch (Paludipasser locustella), 32 common waxbill, 10 blue waxbill (Uraeginthus angolensis), 7 green-winged pytilia (Pytilia melba), 5