National, regional
Households
Sample survey data [ssd]
The 2020 Vietnam COVID-19 High Frequency Phone Survey of Households (VHFPS) uses a nationally representative household survey from 2018 as the sampling frame. The 2018 baseline survey includes 46,980 households from 3,132 communes (about 25% of all communes in Vietnam). In each commune, one enumeration area (EA) is randomly selected, and 15 households are then randomly selected in that EA for interview. The large-module households were used to select households for the official VHFPS interviews, with the small-module households held in reserve for replacement. After data processing, the final sample size for Round 2 is 3,935 households.
Computer Assisted Telephone Interview [cati]
The questionnaire for Round 2 consisted of the following sections:
Section 2. Behavior
Section 3. Health
Section 5. Employment (main respondent)
Section 6. Coping
Section 7. Safety Nets
Section 8. FIES
Data cleaning began during the data collection process. Inputs for the cleaning process include interviewers’ notes following each question item, interviewers’ notes at the end of the tablet form, and supervisors’ notes taken during monitoring. The data cleaning process was conducted in the following steps:
• Append households interviewed in ethnic minority languages with the main dataset interviewed in Vietnamese.
• Remove unnecessary variables which were automatically calculated by SurveyCTO.
• Remove household duplicates in the dataset where the same form is submitted more than once.
• Remove observations of households which were not supposed to be interviewed following the identified replacement procedure.
• Format variables as their object type (string, integer, decimal, etc.)
• Read through interviewers’ notes and make adjustments accordingly. During interviews, whenever interviewers found it difficult to choose the correct code, they were advised to choose the most appropriate one and to record the respondent’s answer in detail, so that the survey management team could review it and decide which code best fits that answer.
• Correct data based on supervisors’ notes where enumerators entered the wrong code.
• Recode the answer option “Other, please specify”. This option is usually followed by a blank field allowing enumerators to type text specifying the answer. The data cleaning team checked these answers thoroughly to decide whether each needed recoding into one of the available categories or should be kept as originally recorded. In some cases an answer was assigned a completely new code if it appeared many times in the survey dataset.
• Examine the accuracy of outlier values, defined as values lying outside the 5th to 95th percentile range, by listening to the interview recordings (see the sketch after this list).
• Final check on matching the main dataset with the other sections; sections where information is collected at the individual level are kept in separate data files in long form.
• Label variables using the full question text.
• Label variable values where necessary.
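As an illustration of the percentile-based outlier flag used above, a minimal pandas sketch (the column and file names are placeholders, not the survey's actual variable names):

```python
import pandas as pd

def flag_outliers(series: pd.Series) -> pd.Series:
    """Flag values outside the 5th-95th percentile range for manual review."""
    lo, hi = series.quantile([0.05, 0.95])
    return (series < lo) | (series > hi)

# Hypothetical usage: 'income' and 'round2.dta' are placeholders, not actual survey names.
# df = pd.read_stata("round2.dta")
# to_review = df.loc[flag_outliers(df["income"]), ["interview_id", "income"]]
```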
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Critical to any regression analysis is the identification of observations that exert a strong influence on the fitted regression model. Traditional regression influence statistics such as Cook's distance and DFFITS, each based on deleting single observations, can fail in the presence of multiple influential observations if these influential observations “mask” one another, or if other effects such as “swamping” occur. Masking refers to the situation where an observation reveals itself as influential only after one or more other observations are deleted. Swamping occurs when points that are not actually outliers/influential are declared to be so because of the effects on the model of other unusual observations. One computationally expensive solution to these problems is the use of influence statistics that delete multiple rather than single observations. In this article, we build on previous work to produce a computationally feasible algorithm for detecting an unknown number of influential observations in the presence of masking. An important difference between our proposed algorithm and existing methods is that we focus on the data that remain after observations are deleted, rather than on the deleted observations themselves. Further, our approach uses a novel confirmatory step designed to provide a secondary assessment of identified observations. Supplementary materials for this article are available online.
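For reference, the single-deletion Cook's distance that the abstract contrasts against can be computed directly from the hat matrix. The NumPy sketch below shows only that baseline diagnostic, not the article's multiple-deletion algorithm or its confirmatory step:

```python
import numpy as np

def cooks_distance(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Single-deletion Cook's distance for an OLS fit of y on X (X includes an intercept column)."""
    n, p = X.shape
    H = X @ np.linalg.pinv(X)        # hat matrix, since pinv(X) = (X'X)^{-1} X' for full-rank X
    h = np.diag(H)                   # leverages
    resid = y - H @ y                # OLS residuals
    s2 = resid @ resid / (n - p)     # residual variance estimate
    return (resid**2 / (p * s2)) * h / (1.0 - h)**2

# Masked influential points may all show small Cook's distances individually,
# which is exactly the failure mode the article's multiple-deletion approach addresses.
```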
For the first 5 years of INTEGRAL's operational life, the scientific Core Programme included regular scans of the Galactic Plane as a key component. These led to a wealth of discoveries of new sources and source types, a large fraction of which were highly transient. These discoveries can certainly be considered one of the strongest results from, and legacies of, INTEGRAL. Since AO5, these regular scans have been discontinued, and this has resulted in a significant drop in the discovery rate of new systems in and around the plane of our Galaxy. We propose to reinstate the Galactic Plane Scans as a Key Programme throughout AO8 and AO9, to allow the regular monitoring of known systems and dramatically enhance the chances of discovering new systems. Such a programme will be of high value to a very large fraction of the high-energy astronomy community, stimulating science immediately and contributing greatly to the INTEGRAL legacy. To this aim, a total of 2 Msec/year is necessary to cover the Plane with regular scans every orbit, excluding the central zone to be covered by the Galactic Bulge monitoring programme (should that programme be accepted). We also suggest that, in order to maximise the engagement of the scientific community, the observations be made public immediately. The team will make the IBIS and JEMX light curves in two energy bands per science window and per observation, as well as the mosaic images, publicly available through the web as soon as possible after the observations have been performed. Any interesting source behaviour that emerges from our observations will be announced promptly, so that rapid follow-up by the community is possible. [truncated; please see actual data for full text]
The initial Phoenix Deep Survey (PDS) observations with the Australia Telescope Compact Array (ATCA) have been supplemented by additional 1.4 GHz observations over the past few years. Here we present details of the construction of a new mosaic image covering an area of 4.56 deg2 referred to as the Phoenix Deep Field (PDF), an investigation of the reliability of the source measurements, and the 1.4 GHz source counts for the compiled radio catalog. The mosaic achieves a 1-sigma rms noise of 12 µJy at its most sensitive, and a homogeneous radio-selected catalog of over 2000 sources reaching flux densities as faint as 60 µJy has been compiled. The source parameter measurements are found to be consistent with the expected uncertainties from the image noise levels and the Gaussian source fitting procedure. A radio-selected sample avoids the complications of obscuration associated with optically selected samples, and by utilizing complementary PDS observations, including multicolor optical, near-infrared, and spectroscopic data, this radio catalog will be used in a detailed investigation of the evolution in star formation spanning the redshift range 0 < z < 1. The homogeneity of the catalog ensures a consistent picture of galaxy evolution can be developed over the full cosmologically significant redshift range of interest. The PDF covers a high-latitude region that is of low optical obscuration and devoid of bright radio sources. ATCA 1.4 GHz observations were made in 1994, 1997, 1999, 2000, and 2001 in the 6A, 6B, and 6C array configurations, accumulating a total of 523 hr of observing time. The initial 1994 ATCA observations (Hopkins et al. 1998, MNRAS, 296, 839; Hopkins 1998, PhD thesis) consisted of 30 pointings on a hexagonal tessellation, resulting in a 2-degree-diameter field centered on R.A. = 01h 14m 12.16s, Dec = -45° 44' 8.0" (J2000.0), with roughly uniform sensitivity of about 60 µJy rms. This survey was supplemented from 1997 to 2001 by extensive observations of a further 19 pointings situated on a more finely spaced hexagonal grid, centered on R.A. = 01h 11m 13.0s, Dec = -45° 45' 00" (J2000.0). The locations of all pointing centers are given in Table 1 of the reference paper. The final mosaic constructed from all 49 pointings was trimmed to remove the highest-noise regions at the edges by masking out regions with an rms noise level greater than 0.25 mJy. The trimmed PDF mosaic image covers an area of 4.56 deg2 and reaches a measured level of 12 µJy rms noise in the most sensitive regions. The table contained here is the final merged catalog of PDS surveys, based on the union of the 10% false discovery rate (FDR) threshold catalog (PDS_atca_fdr10_full_vis.cat) for the trimmed mosaic, visually edited to remove objects clearly associated with artifacts close to bright sources and containing 2058 sources, and the 10% FDR threshold catalog (PDS_atca_fdr10_deep.cat) for the 33' x 33' region centered on the most sensitive portion of the mosaic, containing 491 sources. The merged catalog was constructed to contain all unique catalogued sources; where common sources were identified, only the entry from PDS_atca_fdr10_deep.cat was retained. There are a total of 2148 sources in the final merged catalog, of which up to 10% may be false.
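The catalogs above are defined by a 10% false discovery rate threshold on source detections. As a generic illustration of how such a threshold is applied to per-source detection p-values (not necessarily the exact procedure used to build the PDF catalogs), a Benjamini-Hochberg sketch in Python:

```python
import numpy as np

def bh_threshold(p_values: np.ndarray, q: float = 0.10) -> float:
    """Largest p-value admitted at false discovery rate q (Benjamini-Hochberg).

    Generic illustration only; the PDF catalogs were built with the FDR method
    described in the reference paper, which may differ in detail.
    """
    p = np.sort(np.asarray(p_values))
    m = p.size
    below = p <= q * np.arange(1, m + 1) / m   # BH condition p_(k) <= k/m * q
    return p[below].max() if below.any() else 0.0

# detections = p_values <= bh_threshold(p_values, q=0.10)
```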
This table was created by the HEASARC in November 2012 based on the file PDS_atca_fdr10_merge.cat, the merged PDS catalog (derived from the individual catalogs PDS_atca_fdr10_full_vis.cat and PDS_atca_fdr10_deep.cat as discussed in the Overview above), which was obtained from the first author's website https://web.archive.org/web/20171009234923/www.physics.usyd.edu.au/~ahopkins/phoenix/. Some of the values for the name parameter in the HEASARC's implementation of this table were corrected in April 2018. This is a service provided by NASA HEASARC.
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Precipitation measurements in the Environment and Climate Change Canada (ECCC) surface network are a necessary component for monitoring weather and climate and are required for flood and water resource forecasting, numerical weather prediction, and many other applications that impact the health and safety of Canadians. Beginning in the late 1990s, the ECCC surface network began a transition from manual to automated precipitation measurements. Advantages of increased automation include enhanced capabilities for monitoring in remote locations and higher observation frequency at lower cost. However, the transition to automated precipitation gauges has introduced new challenges to data quality, accuracy, and homogenization. Automated weighing precipitation gauges used in the ECCC operational network, because of their physical profile, tend to measure less precipitation falling as snow because lighter particles (snow) are deflected away from the collector by the wind flow around the gauge orifice. This phenomenon of wind-induced systematic bias is well documented in the literature. The observation requires an adjustment depending on gauge and shield configuration, precipitation phase, temperature, and wind speed. Hourly precipitation, wind speed, and temperature for 397 ECCC automated surface weather stations were retrieved from the ECCC national archive. Climate Research Division (CRD) selected this subset of stations because they are critical to the continuity of various climate analyses. The observation period varies by station, with the earliest data series beginning in 2001 (and most beginning in 2004). The precipitation data was quality controlled using established techniques to identify and flag outliers, remove spurious observations, and correct for previously identified filtering errors. The resulting hourly precipitation data was adjusted for wind bias using the WMO Solid Precipitation Inter-Comparison Experiment (SPICE) Universal Transfer Function (UTF) equation. A full description of this data set, including the station locations, data format, methodology, and references, is included in the repository. There are now multiple versions of this dataset available, with the later versions being the most up to date and employing the most advanced adjustment techniques. Information on versioning is included in the documentation.
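The adjustment step amounts to dividing each gauge measurement by a catch-efficiency factor derived from the concurrent wind speed and temperature. The Python sketch below shows only that general idea; the catch_efficiency_placeholder function and its numbers are invented for illustration, and the actual SPICE UTF equation and gauge/shield-specific coefficients are those documented in the repository.

```python
import numpy as np

def catch_efficiency_placeholder(wind_ms: np.ndarray, temp_c: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for the SPICE Universal Transfer Function: efficiency drops
    with wind speed, more strongly below freezing (snow). Not the published UTF."""
    loss_per_ms = np.where(temp_c < 0.0, 0.08, 0.02)   # made-up rates for illustration only
    return np.clip(1.0 - loss_per_ms * wind_ms, 0.3, 1.0)

def adjust_precip(precip_mm: np.ndarray, wind_ms: np.ndarray, temp_c: np.ndarray) -> np.ndarray:
    """Wind-bias adjustment: divide the gauge measurement by the catch efficiency (CE <= 1)."""
    return precip_mm / catch_efficiency_placeholder(wind_ms, temp_c)
```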
Led by the Electronics System Analyst, these unsung heroes of the NWS keep life-saving observation and transmission equipment operational 24/7/365. The three main types of equipment maintained by ETs are:
• NOAA Weather Radio transmitters
• Automated Surface Observation Systems
• NEXRAD WSR-88D Radar
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: This data set provides feeding rates of juvenile sticklebacks and whitefish of various sizes feeding on seven different zooplankton species in aquaria. The data set has been analysed in Ogorelec et al. (2022) "Can young-of-the-year invasive fish keep up with young-of-the-year native fish? A comparison of feeding rates between invasive sticklebacks and whitefish", Ecology & Evolution.
Other: The data set consists of 300 feeding observations and 5 columns: 1) fish species, 2) fish size (total size in cm), 3) fish ID, 4) zooplankton species, and 5) feeding rate (number of zooplankton ingested per 3-minute investigation period).
The survey of the Galactic Plane has been one of the main INTEGRAL scientific objectives because of the scientific potential of both new source discovery and detailed monitoring of already known sources. We propose a total of 3 Ms of observations (between AO12 and AO13) to continue our Key Program, started in AO8, covering the Plane with regular scans every orbit. Our purpose is to regularly monitor known systems as well as to dramatically enhance the chances of discovering new systems during the regular scans of our Galaxy. This program will allow a rapid response to bright events and a detailed study of faint transients and their long-term activity. Such a programme will be of high value to a very large fraction of the high-energy astronomy community, stimulating science immediately, and furthermore contributing greatly to the INTEGRAL legacy. Also, in order to maximise the engagement of the scientific community, we will continue to make the observations public immediately. The team will make, as it has through AO8-11, the scw-resolution IBIS and JEMX light curves (in two energy bands) and per-revolution mosaic images publicly available through the web as soon as possible after the observations have been performed. Any interesting source behaviour that emerges from our observations will be announced promptly, so that rapid follow-up by the community is possible, as has already been demonstrated. [truncated; please see actual data for full text]
The Room environment - v0
We have released a challenging Gymnasium compatible environment. The best strategy for this environment is to have both episodic and semantic memory systems. See the paper for more information.
Prerequisites
A unix or unix-like x86 machine
python 3.10 or higher
Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system python.
This env is added to the PyPI server. Just run:
pip install room-env
Data collection
Data is collected by querying the ConceptNet APIs. For simplicity, we only collect triples whose format is (head, atlocation, tail). Here head is one of the 80 MS COCO dataset categories. This was kept in mind so that later on we can use images as well.
If you want to collect the data manually, then run below:
python collect_data.py
How does this environment work?
The Gymnasium-compatible Room environment is one big room with N_people people who can freely move around. Each of them selects one object, among N_objects objects, and places it in one of the N_locations locations. N_agents agent(s) are also in this room. They can only observe one human placing an object at a time, x(t). At the same time, they are given one question about the location of an object, q(t). x(t) is given as a quadruple, (h(t), r(t), t(t), t). For example:
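A hypothetical instance (the entity names and timestep below are invented for illustration; the exact encodings used by the environment are defined in the paper and code):

```python
# A hypothetical observation x(t): head, relation, tail, timestep,
# i.e. "Alice's laptop is at the desk", seen at timestep 42.
observation = ("Alice's laptop", "atlocation", "desk", 42)
```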
The reason why the observations and questions are given in an RDF-triple-like format is twofold. First, this structured format is easily readable and writable by both humans and machines. Second, we can use existing knowledge graphs, such as ConceptNet.
To simplify the environment, the agents themselves are not actually moving, but the room is continuously changing. There are several random factors in this environment to be considered:
With probability p_commonsense, a human places an object in a commonsense location (e.g., a laptop on a desk). The commonsense knowledge we use is from ConceptNet. With probability 1 − p_commonsense, the object is placed at a non-commonsense random location (e.g., a laptop on a tree).
With probability p_new_location, a human changes the location of their object.
With probability p_new_object, a human changes their object to another one.
With probability p_switch_person, two people switch their locations. This is done to mimic an agent moving around the room.
Each of the four probabilities parameterizes a Bernoulli distribution.
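A minimal sketch of how these Bernoulli-distributed events could be drawn each step (illustrative only; the environment's actual implementation may differ):

```python
import random

def sample_events(p_commonsense: float, p_new_location: float,
                  p_new_object: float, p_switch_person: float) -> dict:
    """Draw the four independent Bernoulli events that drive the room dynamics (illustrative)."""
    return {
        "place_at_commonsense_location": random.random() < p_commonsense,
        "change_object_location": random.random() < p_new_location,
        "change_object": random.random() < p_new_object,
        "switch_people": random.random() < p_switch_person,
    }

# e.g. sample_events(0.7, 0.1, 0.1, 0.5)
```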
Consider there is only one agent. Then this is a POMDP, where S_t = (x(t), q(t)), A_t = (do something with x(t), answer q(t)), and R_t ∈ {0, 1}.
Currently there is no RL agent trained for this environment. We only have some heuristics. Take a look at the paper for more details.
RoomEnv-v0
```python
import gymnasium as gym

env = gym.make("room_env:RoomEnv-v0")
(observation, question), info = env.reset()
rewards = 0

while True:
    (observation, question), reward, done, truncated, info = env.step("This is my answer!")
    rewards += reward
    if done:
        break

print(rewards)
```
Every time an agent takes an action, the environment gives it an observation and a question to answer. You can try answering the question directly, e.g., env.step("This is my answer!"), but a better strategy is to keep the observations in memory systems and take advantage of both the current observation and the history stored in those memory systems.
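A minimal sketch of that strategy, assuming the quadruple observation layout described above and that a question names a head entity (this is not one of the paper's heuristics):

```python
import gymnasium as gym

# Minimal memory baseline (illustrative only): remember the most recent location
# seen for each head entity and answer questions from that memory.
memory = {}

env = gym.make("room_env:RoomEnv-v0")
(observation, question), info = env.reset()
rewards = 0

while True:
    head, relation, tail, timestep = observation       # assumes the quadruple layout above
    memory[head] = tail                                 # keep only the latest observed location
    answer = memory.get(question[0], "I don't know")    # assumes the question names a head entity
    (observation, question), reward, done, truncated, info = env.step(answer)
    rewards += reward
    if done or truncated:
        break

print(rewards)
```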
Take a look at this repo for an actual interaction with this environment to learn a policy.
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
Fork the Project
Create your Feature Branch (git checkout -b feature/AmazingFeature)
Run make test && make style && make quality in the root repo directory, to ensure code quality
Commit your Changes (git commit -m 'Add some AmazingFeature')
Push to the Branch (git push origin feature/AmazingFeature)
Open a Pull Request
Cite our paper (BibTeX):
@misc{https://doi.org/10.48550/arxiv.2204.01611,
  doi       = {10.48550/ARXIV.2204.01611},
  url       = {https://arxiv.org/abs/2204.01611},
  author    = {Kim, Taewoon and Cochez, Michael and Francois-Lavet, Vincent and Neerincx, Mark and Vossen, Piek},
  keywords  = {Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {A Machine With Human-Like Memory Systems},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
Authors
Taewoon Kim
Michael Cochez
Vincent Francois-Lavet
Mark Neerincx
Piek Vossen
License MIT
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
HerpMapper is a 501(c)(3) nonprofit organization designed to gather and share information about reptile and amphibian observations across the planet. Using HerpMapper, you can create records of your herp observations and keep them all in one place. In turn, your data is made available to HerpMapper Partners: groups who use your recorded observations for research, conservation, and preservation purposes. Your observations can make valuable contributions on behalf of amphibians and reptiles.
Who can see the records you create? There are two levels of visibility for records. Only you and HerpMapper Partners have access to all data in a record. Other users of HerpMapper and the general public can only see very basic information in your records; they do not have access to exact locality data. Any pictures attached to a record can be seen by everyone, which means you can also see the cool herps being recorded by other people from around the world.
Who are the HerpMapper Partners? For the most part, they are biologists working for state or regional agencies, university researchers, or conservation organizations. A list of HerpMapper Partners is maintained on the HerpMapper website.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset and codes for "Observation of Acceleration and Deceleration Periods at Pine Island Ice Shelf from 1997–2023"
The MATLAB codes and related datasets are used for generating the figures for the paper "Observation of Acceleration and Deceleration Periods at Pine Island Ice Shelf from 1997–2023".
Files and variables
File 1: Data_and_Code.zip
Directory: Main_function
Description: Includes MATLAB scripts and functions. Each script includes a description that guides the user on how to use it and how to find the dataset used for processing.
MATLAB main scripts: Include all the steps to process the data, output figures, and output videos.
Script_1_Ice_velocity_process_flow.m
Script_2_strain_rate_process_flow.m
Script_3_DROT_grounding_line_extraction.m
Script_4_Read_ICESat2_h5_files.m
Script_5_Extraction_results.m
MATLAB functions: Directories of MATLAB functions that support the main scripts:
1_Ice_velocity_code: Includes MATLAB functions for ice velocity post-processing: outlier removal, filtering, correction for atmospheric and tidal effects, inverse-weighted averaging, and error estimation.
2_strain_rate: Includes MATLAB functions related to strain rate calculation.
3_DROT_extract_grounding_line_code: Includes MATLAB functions that convert the range offset results output from GAMMA to differential vertical displacement and use the result to extract the grounding line.
4_Extract_data_from_2D_result: Includes MATLAB functions used to extract profiles from 2D data.
5_NeRD_Damage_detection: Modified code from Izeboud et al. 2023. When applying this code, please also cite Izeboud et al. 2023 (https://www.sciencedirect.com/science/article/pii/S0034425722004655).
6_Figure_plotting_code: Includes MATLAB functions related to the figures in the paper and supporting information.
Directory: data_and_result
Description: Includes directories that store the results output from MATLAB. Users only need to modify the paths in the MATLAB scripts to their own paths.
1_origin: Sample data ("PS-20180323-20180329", "PS-20180329-20180404", "PS-20180404-20180410") output from GAMMA software in GeoTIFF format that can be used to calculate DROT and velocity. Includes displacement, theta, phi, and ccp.
2_maskccpN: Remove outliers by ccp < 0.05 and change displacement to velocity (m/day).
3_rockpoint: Extract velocities at non-moving region
4_constant_detrend: Removed orbit error
5_Tidal_correction: Remove atmospheric- and tidally-induced errors
6_rockpoint: Extract non-aggregated velocities at non-moving region
6_vx_vy_v: Transform velocities from va/vr to vx/vy
7_rockpoint: Extract aggregated velocities at non-moving region
7_vx_vy_v_aggregate_and_error_estimate: inverse weighted average of three ice velocity maps and calculate the error maps
8_strain_rate: Calculated strain rate from the aggregated ice velocity
9_compare: store the results before and after tidal correction and aggregation.
10_Block_result: Time series results extracted from the 2D data.
11_MALAB_output_png_result: Stores .png files and time series results
12_DROT: Differential Range Offset Tracking results
13_ICESat_2: ICESat-2 .h5 and .mat files can be put here (this folder only includes the samples from tracks 0965 and 1094)
14_MODIS_images: You can store MODIS images here
shp: grounding line, rock region, ice front, and other shape files.
File 2: PIG_front_1947_2023.zip
Includes ice front position shapefiles from 1947 to 2023, which are used for plotting Figure 1 in the paper.
File 3: PIG_DROT_GL_2016_2021.zip
Includes grounding line position shapefiles from 2016 to 2021, which are used for plotting Figure 1 in the paper.
Data was derived from the following sources:
Those links can be found in the MATLAB scripts or in the "Open Research" section of the paper.
The survey of the Galactic Plane has continued as one of the main INTEGRAL scientific objectives because of the scientific potential of both new source discovery and detailed monitoring of already known sources. We propose a total of 3.0 Ms of observations in AO14 and 3 Ms in AO15 to continue our Key Program covering areas of the Plane with regular scans every orbit. For these AOs we selected a section of the Plane incorporating 2 sky regions that are rich in galactic sources (HMXB and SFXT in particular) and potential MeV/TeV sources. Our purpose is to regularly monitor known systems as well as to dramatically enhance the chances of discovering new systems or new outbursts from known sources. This program will allow a rapid response to bright events and a detailed study of faint transients and their long-term activity. Such a programme will be of high value to a very large fraction of the high-energy astronomy community, stimulating science immediately, and furthermore contributing greatly to the INTEGRAL legacy. In order to maximise the engagement of the scientific community, we will continue to make the observations public immediately. The team will make, as it has through AO8-12 and now in AO13, the scw-resolution IBIS and JEMX light curves (in two energy bands) and per-revolution mosaic images publicly available through the web as soon as possible after the observations have been performed. Any interesting source behaviour that emerges from our observations will be announced promptly, so that rapid follow-up by the community is possible, as has already been demonstrated. [truncated; please see actual data for full text]
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Marine imaging has evolved from small, narrowly focussed applications to large-scale applications covering areas of several hundred square kilometers or time series covering observation periods of several months. The analysis and interpretation of the accumulating large volume of digital images or videos will continue to challenge the marine science community to keep this process efficient and effective. It is safe to say that any strategy will rely on some software platform supporting manual image and video annotation, either for a direct manual annotation-based analysis or for collecting training data to deploy a machine learning–based approach for (semi-)automatic annotation. This paper describes how computer-assisted manual full-frame image and video annotation is currently performed in marine science and how it can evolve to keep up with the increasing demand for image and video annotation and the growing volume of imaging data. As an example, observations are presented of how the image and video annotation tool BIIGLE 2.0 has been used by an international community of more than one thousand users in the last 4 years. In addition, new features and tools are presented to show how BIIGLE 2.0 has evolved over the same time period: video annotation, support for large images in the gigapixel range, machine learning assisted image annotation, improved mobility and affordability, application instance federation and enhanced label tree collaboration. The observations indicate that, despite novel concepts and tools introduced by BIIGLE 2.0, full-frame image and video annotation is still mostly done in the same way as two decades ago, where single users annotated subsets of image collections or single video frames with limited computational support. We encourage researchers to review their protocols for education and annotation, making use of newer technologies and tools to improve the efficiency and effectiveness of image and video annotation in marine science.
The Hadley Centre at the U.K. Met Office has created a global sub-daily dataset of several station-observed climatological variables which is derived from and is a subset of the NCDC's ... Integrated Surface Database. Stations were selected for inclusion into the dataset based on length of the data reporting period and the frequency with which observations were reported. The data were then passed through a suite of automated quality-control tests to remove bad data. See the HadISD web page for more details and access to previous versions of the dataset.
This data includes the vegetation coverage data set for one growth cycle at five stations (Daman superstation, wetland, desert, desert, and Gobi) and the biomass data set of maize and wetland reed for one growth cycle at the Daman superstation. The observation period is from May 10, 2014 to September 11, 2014.
1 coverage observation
1.1 observation time
1.1.1 super station: the observation period is from May 10 to September 11, 2014, with observations every five days before July 20 and every ten days after July 20, for a total of 17 observations.
1.1.2 other four stations: the observation period is from May 20 to September 15, 2014, once every ten days, for a total of 11 observations. The specific observation dates are as follows:
Other four stations: May 10, 2014; May 20, 2014; May 30, 2014; June 10, 2014; June 20, 2014; June 30, 2014; July 10, 2014; July 20, 2014; August 5, 2014; August 17, 2014; September 11, 2014.
1.2 observation method
1.2.1 measuring instruments and principles:
The digital camera is placed on the instrument platform at the front end of a simple support pole so that it shoots vertically downward, and the camera is triggered remotely to record the measurements. The observation frame can be used to change the shooting height of the camera, enabling targeted measurements for different vegetation types.
1.2.2 sample plot design
Super station: three plots in total, each 10 × 10 m in size; photos are taken along the two diagonals in turn each time, 9-10 photos in total. Wetland station: two plots, each 10 × 10 m in size, with 9-10 photos taken per survey.
Other three stations: one plot per station, each 10 × 10 m in size, with 9-10 photos taken per survey.
1.2.3 shooting method
For the super station corn and the wetland station reed, the observation frame is used to ensure that the camera on the frame is well above the vegetation canopy. Samples are taken along the diagonals within the square quadrat and then arithmetically averaged. With a small field angle (< 30°), the field of view includes more than two full ridge cycles, and the side of the photo is kept parallel to the ridges. At the other three sites, because the vegetation is relatively low, the camera is used directly to take photos vertically downward (without the bracket).
1.2.4 coverage calculation
The coverage calculation is completed by Beijing Normal University using an automatic classification method; for details, see reference 1 of the "recommended references". By transforming the RGB color space to the Lab space, in which green vegetation is easier to distinguish, the histogram of the green-related component a is clustered to separate green vegetation from the non-green background, yielding the vegetation coverage of a single photo. The advantage of this method lies in its simple algorithm, easy implementation, and high degree of automation and precision. In the future, faster, more automatic, and more accurate classification methods are needed to maximize the advantages of the digital camera method.
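A minimal Python sketch of the general idea, assuming scikit-image is available and using a simple Otsu split of the a* histogram; the production classification by Beijing Normal University may differ in detail:

```python
import numpy as np
from skimage import io, color, filters

def vegetation_coverage(image_path: str) -> float:
    """Estimate the green-vegetation fraction of a nadir photo by thresholding the a* channel
    in CIELAB space (green vegetation has strongly negative a*). Illustrative only."""
    rgb = io.imread(image_path)[..., :3] / 255.0      # drop alpha channel if present
    a_channel = color.rgb2lab(rgb)[..., 1]            # a* channel: green (negative) vs red (positive)
    threshold = filters.threshold_otsu(a_channel)     # simple two-class split of the a* histogram
    vegetation = a_channel < threshold                # more negative a* -> greener pixels
    return float(vegetation.mean())
```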
2 biomass observation
2.1 observation time
2.1.1 corn: the observation period is from May 10 to September 11, 2014, with observations every five days before July 20 and every ten days after July 20, for a total of 17 observations.
2.1.2 Reed: the observation period is from May 20 to September 15, 2014, once every ten days, for a total of 11 observations. The specific observation dates are as follows: May 10, 2014; May 20, 2014; May 30, 2014; June 10, 2014; June 20, 2014; June 30, 2014; July 10, 2014; July 20, 2014; August 5, 2014; August 17, 2014; September 11, 2014.
2.2 observation method
Corn: three sample plots are selected; at each observation, three corn plants representing the average level of each plot are selected, the fresh weight (aboveground biomass + belowground biomass) and the corresponding dry weight (oven-dried at a constant 85°C) are weighed, and the biomass of corn per unit area is calculated from the plant spacing and row spacing (a worked example follows the reed method below);
Reed: two 0.5 m × 0.5 m quadrats are set up and cut at the same location each time, and the fresh weight (stems and leaves) and dry weight (oven-dried at a constant 85°C) of the reed are weighed respectively.
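A small worked example of the per-area conversion for corn, with invented spacing and weight values:

```python
# Hypothetical example: converting per-plant dry weight to biomass per unit ground area.
dry_weight_per_plant_g = 150.0   # mean of the three sampled corn plants (illustrative value)
plant_spacing_m = 0.25           # along-row spacing (illustrative)
row_spacing_m = 0.50             # between-row spacing (illustrative)

ground_area_per_plant_m2 = plant_spacing_m * row_spacing_m
biomass_g_per_m2 = dry_weight_per_plant_g / ground_area_per_plant_m2
print(biomass_g_per_m2)          # 1200 g/m^2 for these illustrative numbers
```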
2.3 observation instruments
Balance (accuracy 0.01g), oven.
3 data storage
From 1989 to 2005, we have discovered a total of 23 millisecond pulsars in 47 Tucanae and obtained coherent timing solutions for 19 of those. This dataset has already allowed studies of stellar evolution and cluster dynamics and the first detection of the interstellar medium in a globular cluster. The remaining scientific objectives of this project can now be accomplished with less intensive, long-term timing: we want to keep track of the rotational phase of these pulsars and increase the number and precision of measured proper motions. The full analysis of all the previous data is still ongoing: we have recently found a new pulsar (47 Tuc Z) and two very good new candidates, confirmed a previously known object (47 Tuc X), and determined the timing solutions of three others (the binary pulsar with the shortest orbital period, 47 Tuc R; another eclipsing binary pulsar, 47 Tuc W; and a newly discovered binary, 47 Tuc Y). This has allowed the identification of these three pulsars in X-rays using Chandra and the identification of the companion of 47 Tuc Y in optical HST data. Some of the 47 Tuc pulsars are now becoming detectable in gamma-rays with Fermi, but we need to keep updated radio ephemerides. The metadata and files (if any) are available to the public.
The world is filled with nature watchers, from trampers to hunters, birders to beach-combers, and pros to school kids. Many of us keep notes of what we find. What if all those observations could be shared online? You might learn about the butterflies that live in your neighbourhood, or discover someone who knows all about the plants in your favourite reserve. For a long time, everyone's notes have been scattered in notebooks, private spreadsheets and dusty library shelves. As a society, we have seen a lot but collectively we remain blind to most changes in our biodiversity. If enough people record their observations on NatureWatch NZ, we can change all this. We can build a living record of life in New Zealand that scientists and environmental managers can use to monitor changes in biodiversity, and that anyone can use to learn more about New Zealand's amazing natural history. Only "research-quality" observations are used in this data set - that is, observations that have their species identification peer-reviewed by at least one independent source. All biodiversity observations are available at http://naturewatch.org.nz/. NatureWatch NZ is run by the New Zealand Bio-Recording Network Trust, a charitable trust dedicated to bio-recording. Our lofty aims are: (1) To increase knowledge, understanding, and appreciation of New Zealand's natural history. (2) To engage and assist New Zealanders in observing and recording biological information. (3) To develop and support online tools to assist individuals and groups to record, view, share and use biological information. (4) To collaborate with people and groups interested in bio-recording. (5) To promote and provide secure, open, and ethical sources of biological information for the public.
Most of us understand the hydrologic cycle in terms of the visible paths that water can take, such as rainstorms, rivers, waterfalls and lakes. However, an even larger volume of water flows through the air all around us in two invisible paths: evaporation and transpiration. These two paths together are referred to as evapotranspiration (ET), and claim 61% of all terrestrial precipitation. Solar radiation, air temperature, wind speed, soil moisture, and land cover all affect the rate of evapotranspiration, which is a major driver of the global water cycle and a key component of most catchments' water budget. This map contains a historical record showing the volume of water lost to evapotranspiration during each month from March 2000 to the present.
Dataset Summary
The GLDAS Evapotranspiration layer is a time-enabled image service that shows total actual evapotranspiration monthly from 2000 to the present, measured in millimeters of water loss. It is calculated by NASA using the Noah land surface model, run at 0.25 degree spatial resolution using satellite and ground-based observational data from the Global Land Data Assimilation System (GLDAS-1). The model is run with 3-hourly time steps and aggregated into monthly averages. Review the complete list of model inputs, explore the output data (in GRIB format), and see the full Hydrology Catalog for all related data and information!
What can you do with this layer?
This layer is suitable for both visualization and analysis. It can be used in ArcGIS Online in web maps and applications and can be used in ArcGIS for Desktop. It is useful for scientific modeling, but only at global scales.
Time: This is a time-enabled layer. It shows the total evaporative loss during the map's time extent, or, if time animation is disabled, a time range can be set using the layer's multidimensional settings. The map shows the sum of all months in the time extent. Minimum temporal resolution is one month; maximum is one year.
Important: You must switch from the cartographic renderer to the analytic renderer in the processing template tab in the layer properties window before using this layer as an input to geoprocessing tools.
This layer has query, identify, and export image services available. This layer is part of a larger collection of earth observation maps that you can use to perform a wide variety of mapping and analysis tasks. The Living Atlas of the World provides an easy way to explore the earth observation layers and many other beautiful and authoritative maps on hundreds of topics. GeoNet is a good resource for learning more about earth observation layers and the Living Atlas of the World. Follow the Living Atlas on GeoNet.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The survey of the Galactic Plane has continued as one of the main INTEGRAL scientific objectives because of the scientific potential of both new source discovery and detailed monitoring of already known sources, which will provide a legacy archive and a basis for future missions. We propose a total of 2 Ms of observation in AO20 to continue our previously approved Key Program covering areas of the Plane with regular scans every orbit. For this AO we selected again the section of the Plane incorporating two sky regions centred at l = ±30, which are both rich in galactic sources (LMXB and HMXB, SFXT in particular) and potential MeV/TeV sources. In view of the new z-flip mission strategy, our priority is still this region, but we are flexible to extend the program along the Plane as during AO18/AO19. Our purpose is to regularly monitor known systems as well as to dramatically enhance the chances of discovering new systems or new outbursts from known sources. This program will allow a rapid response to bright events and a detailed study of faint transients and their long-term activity. Such a programme will be of high value to a very large fraction of the high-energy astronomy community, stimulating science immediately, and furthermore contributing greatly to the INTEGRAL legacy. In order to maximise the engagement of the scientific community, we will continue to make the observations public immediately. The team will make, as it has done through AO8-19, the scw-resolution IBIS and JEMX light curves (in two energy bands) and per-revolution mosaic images publicly available through the web as soon as possible. Any interesting source behaviour that emerges from our observations will be announced promptly in the usual way through ATels, so that rapid follow-up by the community is possible, as has already been demonstrated. [truncated; please see actual data for full text]