Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
kemuriririn/index-tts-2-examples dataset hosted on Hugging Face and contributed by the HF Datasets community
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
Reference: https://www.zillow.com/research/zhvi-methodology/
In setting out to create a new home price index, a major problem Zillow sought to overcome in existing indices was their inability to deal with the changing composition of properties sold in one time period versus another time period. Both a median sale price index and a repeat sales index are vulnerable to such biases (see the analysis here for an example of how influential the bias can be). For example, if expensive homes sell at a disproportionately higher rate than less expensive homes in one time period, a median sale price index will characterize this market as experiencing price appreciation relative to the prior period of time even if the true value of homes is unchanged between the two periods.
The ideal home price index would be based on sale prices for the same set of homes in each time period, so that the sales mix is never an issue across periods. This approach of using a constant basket of goods is widely used, common examples being a commodity price index and a consumer price index. Unfortunately, unlike commodities and consumer goods, for which we can observe prices in all time periods, we can't observe prices on the same set of homes in all time periods because not all homes are sold in every time period.
The innovation that Zillow developed in 2005 was a way of approximating this ideal home price index by leveraging the valuations Zillow creates on all homes (called Zestimates). Instead of actual sale prices on every home, the index is created from estimated sale prices on every home. While there is some estimation error associated with each estimated sale price (which we report here), this error is just as likely to be above the actual sale price of a home as below (in statistical terms, this is referred to as minimal systematic error). Because of this fact, the distribution of actual sale prices for homes sold in a given time period looks very similar to the distribution of estimated sale prices for this same set of homes. But, importantly, Zillow has estimated sale prices not just for the homes that sold, but for all homes even if they didn’t sell in that time period. From this data, a comprehensive and robust benchmark of home value trends can be computed which is immune to the changing mix of properties that sell in different periods of time (see Dorsey et al. (2010) for another recent discussion of this approach).
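A toy simulation (not Zillow's model) makes the composition-bias point concrete: with true home values held fixed, a median-of-sales index drifts when expensive homes sell disproportionately often, while a median over estimated values of all homes does not. All numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical market: 10,000 homes whose true values do not change
# between period 1 and period 2.
values = rng.lognormal(mean=12.5, sigma=0.5, size=10_000)

# Period 1: a random 5% of homes sell.
sold_1 = rng.choice(values, size=500, replace=False)

# Period 2: expensive homes sell at a disproportionately higher rate
# (selection probability proportional to value).
sold_2 = rng.choice(values, size=500, replace=False, p=values / values.sum())

# A median-sale-price index reports spurious "appreciation" from the mix shift.
print(f"median sale price index: {np.median(sold_2) / np.median(sold_1):.3f}")

# Estimated sale prices for ALL homes in each period, with zero-mean
# (non-systematic) estimation error, as the text describes.
est_1 = values * rng.normal(1.0, 0.05, size=values.size)
est_2 = values * rng.normal(1.0, 0.05, size=values.size)

# The all-homes index stays near 1.0 despite the changed sales mix.
print(f"all-homes value index:   {np.median(est_2) / np.median(est_1):.3f}")
```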
For an in-depth comparison of the Zillow Home Value Index to the Case-Shiller Home Price Index, please refer to the Zillow Home Value Index Comparison to Case-Shiller.
Each Zillow Home Value Index (ZHVI) is a time series tracking the monthly median home value in a particular geographical region. In general, each ZHVI time series begins in April 1996. We generate the ZHVI at eight geographic levels: neighborhood, ZIP code, city, congressional district, county, metropolitan area, state, and nation.
Estimated sale prices (Zestimates) are computed based on proprietary statistical and machine learning models. These models begin the estimation process by subdividing all of the homes in the United States into micro-regions, or subsets of homes either near one another or similar in physical attributes to one another. Within each micro-region, the models observe recent sale transactions and learn the relative contribution of various home attributes in predicting the sale price. These home attributes include physical facts about the home and land, prior sale transactions, tax assessment information and geographic location. Based on the patterns learned, these models can then estimate sale prices on homes that have not yet sold.
The sale transactions from which the models learn patterns include all full-value, arms-length sales that are not foreclosure resales. The purpose of the Zestimate is to give consumers an indication of the fair value of a home under the assumption that it is sold as a conventional, non-foreclosure sale. Similarly, the purpose of the Zillow Home Value Index is to give consumers insight into the home value trends for homes that are not being sold out of foreclosure status. Zillow research indicates that homes sold as foreclosures have typical discounts relative to non-foreclosure sales of between 20 and 40 percent, depending on the foreclosure saturation of the market. This is not to say that the Zestimate is not influenced by foreclosure resales. Zestimates are, in fact, influenced by foreclosure sales, but the pathway of this influence is through the downward pressure foreclosure sales put on non-foreclosure sale prices. It is the price signal observed in the latter that we are attempting to measure and, in turn, predict with the Zestimate.
Market Segments: Within each region, we calculate the ZHVI for various subsets of homes (or mar...
Apollo 17 Coarse Fines (4-10 mm): Sample Location, Classification and Photo Index; C. Meyer
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The subscript "ind" abbreviates industry and "all" abbreviates overall. For each scenario, the first row of values gives the association measures, while the second row (in parentheses) gives the association measures divided by their respective maximum values. Note that scenario A provides the calculation of these maximum values. Summations in subscripts refer to aggregations/projections along one or more dimensions of each array: summing over the industry and sex dimensions gives the association measure of the aggregation/projection across all dimensions other than race, and likewise, collapsing all dimensions other than sex gives the association measure for sex.
The consumer price surveys primarily provide the following:
• Data on the CPI in Palestine, covering the West Bank, Gaza Strip, and Jerusalem J1, for major and sub-groups of expenditure.
• Statistics needed by decision-makers, planners, and those interested in the national economy.
• A contribution to the preparation of quarterly and annual national accounts data.
Consumer prices and indices are used for a wide range of purposes, the most important of which are as follows:
• Adjusting wages, government subsidies, and social security benefits to compensate, in part or in full, for changes in living costs.
• Providing an index of price inflation for the entire household sector, used to eliminate the inflation impact from the components of the final consumption expenditure of households in the national accounts, and to remove the impact of price changes from income and other national aggregates.
• Measuring inflation rates and economic recession, for which price index numbers are widely used.
• Serving as a guide for the public with regard to the family budget and its constituent items.
• Monitoring changes in the prices of goods traded in the market and, consequently, price trends, market conditions, and living costs. However, the price index does not reflect other factors affecting the cost of living, e.g. the quality and quantity of purchased goods; it is therefore only one of many indicators used to assess living costs.
• Measuring the purchasing power of money directly, since the purchasing power of money is inversely proportional to the price index (see the sketch below).
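To make the last two points concrete, here is a minimal sketch of a Laspeyres-type index computed from base-period expenditure weights. The basket, weights, and prices are entirely hypothetical, and the PCBS compilation method is more involved than this:

```python
# Base-period expenditure weights and prices for a hypothetical 3-item basket.
weights = {"bread": 0.5, "fuel": 0.3, "rent": 0.2}        # weights sum to 1
base_prices = {"bread": 2.0, "fuel": 10.0, "rent": 400.0}
current_prices = {"bread": 2.2, "fuel": 11.0, "rent": 400.0}

# Laspeyres-type CPI: weighted average of price relatives, base period = 100.
cpi = 100 * sum(w * current_prices[i] / base_prices[i] for i, w in weights.items())
print(f"CPI: {cpi:.1f}")                      # 108.0

# Purchasing power of money is inversely proportional to the index.
print(f"Purchasing power: {100 / cpi:.3f}")   # ~0.926 of base-period money
```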
Palestine West Bank Gaza Strip Jerusalem
The target population for the CPI survey is the shops and retail markets such as grocery stores, supermarkets, clothing shops, restaurants, public service institutions, private schools and doctors.
Sample survey data [ssd]
A non-probability purposive sample of sources, from which the prices of different goods and services are collected, was updated based on the 2017 establishment census, in a manner that achieves full coverage of all goods and services that fall within the Palestinian consumer system. These sources were selected based on the availability of goods within them. The sample of sources was drawn from the main cities of Palestine: Jenin, Tulkarm, Nablus, Qalqiliya, Ramallah, Al-Bireh, Jericho, Jerusalem, Bethlehem, Hebron, Gaza, Jabalia, Deir Al-Balah, Nusseirat, Khan Yunis, and Rafah, and was chosen to be representative of the variation that can occur in the prices collected from the various sources. The CPI covers approximately 730 goods and services, whose prices were collected from 3,200 sources. The COICOP classification is used for consumer data, as recommended by the United Nations System of National Accounts (SNA 2008).
Not applicable
Computer Assisted Personal Interview [capi]
A tablet-based electronic form was designed for the price surveys and is used by field teams to collect data in the governorates, with the exception of Jerusalem J1. The form is supported by GIS and GPS mapping techniques that allow field workers to locate the outlets precisely on the map and administrative staff to manage the fieldwork remotely. The electronic questionnaire is divided into five screens:
First screen: metadata for the data source: governorate name, governorate code, source code, source name, full source address, and phone number.
Second screen: the source interview result, which is either completed, temporarily paused, or permanently closed; it also flags the activity as incomplete or rejected, with an explanation of the reason for rejection.
Third screen: item code, item name, item unit, item price, product availability, and reason for unavailability.
Fourth screen: checks the price data of the related source and verifies their validity through auditing rules designed specifically for the price programs.
Fifth screen: saves and sends the data over a VPN connection using Wi-Fi.
For the Jerusalem J1 Governorate, a paper form was designed to collect the price data: the top part contains the metadata of the data source and the lower section contains the price data collected from the source. The data are then entered into the price program database.
The price survey forms were pre-coded by the project management according to the specific international statistical classification of each survey. After the researcher collected the price data and sent them electronically, the data were reviewed and audited by the project management. Achievement reports were reviewed on a daily and weekly basis, and detailed price reports at the data source level were checked and reviewed daily by the project management. If there were any notes, the researcher was consulted to verify the data, and the source owner was called to correct or confirm the information.
At the end of the data collection process in all governorates, the data are edited using the following process:
• Logical revision of prices, by comparing the prices of goods and services with those from other sources and other governorates; whenever a mistake is detected, it is returned to the field for correction.
• Mathematical revision of the average prices for items in each governorate and of the general average across all governorates.
• Field revision of prices, through selecting a sample of the prices collected for the items.
Not applicable
The findings of the survey may be affected by sampling errors, because a sample was used rather than a total enumeration of the units of the target population; this increases the chance that estimates deviate from the values a full enumeration would have yielded. The computed variation of the most important key goods differs according to the specialty of each survey. For the CPI, the variation between goods was very low, except in some cases such as bananas, tomatoes, and cucumbers, which had a high coefficient of variation during 2019 due to the strong oscillation in their prices. Variance for the key goods in the CPI was computed and disseminated at the Palestine level only, for reasons related to sample design and variance calculation of the different indicators; dissemination of results by governorate was not possible due to the lack of weights.

Non-sampling errors are possible at all stages of data collection and data entry. They include:
• Non-response errors: the selected sources cooperated closely with interviewers, so no case of non-response was reported during 2019.
• Response errors (respondent), interviewing errors (interviewer), and data entry errors: to reduce these to a minimum, project managers made more than one visit to every source to explain the objectives of the survey and emphasize the confidentiality of the data; these visits strengthened relations and cooperation and helped verify data accuracy.
• Interviewer errors: interviewers were selected based on educational qualification, competence, and assessment; they were trained theoretically and practically on the questionnaire; meetings were held to remind them of the instructions; and explanatory notes were supplied with the surveys.
• Processing and data entry errors (noting that data collected through paper questionnaires did not exceed 5%): data entry staff were selected from among specialists in computer programming and were fully trained on the entry programs; data verification was carried out for 10% of the entered questionnaires to ensure that data were entered correctly and in accordance with the provisions of the questionnaire, and the verification agreed with the original data to a degree of 100%; the files of entered data were received, examined, and reviewed by project managers before findings were extracted; project managers carried out many checks on data logic and coherence, such as comparing the data of the current month with that of the previous month and comparing data between sources and between governorates; and data collected on tablet devices were checked for consistency and accuracy by applying rules at the item level.
Other technical procedures to improve data quality: Seasonal adjustment processes and estimations of non-available items' prices: Under each category, a number of common items are used in Palestine to calculate the price levels and to represent the commodity within the commodity group. Of course, it is
Limitations on public access (INSPIRE Directive Article 13(1)(d)): http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/INSPIRE_Directive_Article13_1d
The map shows the localities where samples that form part of the BGS rock collections have been taken. Many of these samples are from surface exposure, and were collected by BGS geologists during the course of geological mapping programmes. Others are from onshore boreholes or from mine and quarry workings. The principal collections are the E (England and Wales), S (Scotland), N (continuation of the S collection) and the MR (miscellaneous). The collections, which are held at the BGS offices at Keyworth (Nottingham) and Edinburgh, comprise both hand specimens and thin sections, although in individual samples either may not be immediately available. Users may also note that the BGS holds major collections of borehole cores and hand specimens as well as over a million palaeontological samples. The Britrocks database provides an index to these collections. With over 120,000 records, it now holds data for some 70% of the entire collections, including the UK samples shown in this application as well as rocks from overseas locations and reference minerals. The collections are continuously being added to and sample records from archived registers are also being copied into the electronic database. Map coverage is thin in some areas where copying from original paper registers has not been completed. Further information on Britrocks samples in these and other areas can be obtained from the Chief Curator at the BGS Keyworth (Nottingham) office or from the rock curator at the BGS Murchison House (Edinburgh) office.
Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
License information was derived automatically
This copy of the Index to Marine and Lacustrine Geological Samples (IMLGS) was created on April 22, 2025, before its decommission on May 5, 2025. In addition to the CSV file of the sample data, this deposit includes the HTML of the original NCEI page (https://www.ncei.noaa.gov/products/index-marine-lacustrine-samples), a web archive of metadata provided by NCEI from https://data.noaa.gov//metaview/page?xml=NOAA/NESDIS/NGDC/MGG/Geology/iso/xml/G00028.xml&view=getDataView&header=none, and an ML Commons Croissant metadata file that was generated for the CSV file. The keywords below come from the NCEI dataset overview page. The Croissant file contains basic information about the columns. See the NCEI overview for more context on this dataset.

Original description from the dataset landing page (https://www.ncei.noaa.gov/products/index-marine-lacustrine-samples): The Index to Marine and Lacustrine Geological Samples (IMLGS) is a community-designed and community-maintained resource that enables scientists to discover and access geological material from seabed and lakebed cores, grabs, and dredges archived at participating institutions around the world. Sample material is available directly from each repository. Before proposing research on any sample, please contact the repository's curator regarding sample condition and availability. Each repository submits data gleaned from physical samples to the IMLGS database, which is maintained by NOAA's National Centers for Environmental Information (NCEI). All sample data include basic collection and storage information, whereas some samples, at the discretion of the curator, may also include lithology, texture, age, mineralogy, weathering, metamorphism, glass remarks, color, physiographic province, principal investigator, and/or descriptive information. The public can access the IMLGS database by using NOAA NCEI's data access resources.
The data show the location of seabed and sub-seabed samples collected from the UK continental shelf and held by BGS. A BGS Sample Station is a general location at which sampling with one or more equipment types, such as borehole, grab, or dredge, has been carried out. Historically, all deployments of equipment were recorded with the same coordinates, so the data shown here will often include several sets of data at the same location; newer data will begin to show distinct locations for each equipment type. This layer shows all BGS Sample Station locations, including those where the sampling was unsuccessful. The layers below are divided into distinct equipment types, plus a separate layer for unsuccessful sampling. BGS Sample Stations can have a wide range of potential information available, varying from a basic description derived from a simple piece of paper up to a complex set of information with a number of datasets, which can include particle size analysis, geotechnical parameters, detailed marine geology, geochemical analysis, and others. Prices are available on further enquiry.
The Index to Marine and Lacustrine Geological Samples (IMLGS) product is being decommissioned. Archived IMLGS data will only be available via an archive request to ncei.info@noaa.gov. The planned retirement date is May 5, 2025.
The Index to Marine and Lacustrine Geological Samples (IMLGS) describes and provides access to ocean floor and lakebed rock and sediment samples curated by participating institutional and government repositories in the U.S., Canada, the United Kingdom, France, and Germany. Each curatorial facility prepares and submits data about their own collection to NCEI for inclusion in the IMLGS. NCEI, on behalf of the Curator community, maintains the IMLGS database and a dedicated web application for data discovery and access. Physical material from most samples may be requested from the responsible Curator for scientific research using contact information provided in IMLGS data listings. As of July 2023, the IMLGS includes information for 228,785 discrete seabed and lakebed cores, grabs, dredges, and drill holes worldwide.
Minimum sample information required for the IMLGS includes ship/platform name, cruise ID, sample ID, sampling device, and latitude/longitude. Water depth, collection date, storage method, and principal investigator are usually included. Core dimensions and the depth to the top and bottom of each interval are available for many core samples. Descriptions, comments, physiographic province, lithology, texture, mineralogy, other components, glass remarks, metamorphism information, weathering information, color, and geologic age are included for some samples. An International Generic Sample Number (IGSN) is included, if available. Links are also provided to related data and images at NCEI and partner institutions, and to other sources of information including the System for Earth SAmple Registration (SESAR) and the Rolling Deck to Repository (R2R).
The IMLGS database was initially designed by a group of Curators, in cooperation with NGDC (now NCEI), at a series of meetings sponsored by the U.S. National Science Foundation (NSF) beginning in 1977. The Curators group continues to meet annually to share best practices and oversee the IMLGS database.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Leaf Area Index (LAI) is a fundamental vegetation structural variable that drives energy and mass exchanges between the plant and the atmosphere. Moderate-resolution (300m – 7km) global LAI data products have been widely applied to track global vegetation changes, drive Earth system models, monitor crop growth and productivity, etc. Yet, cutting-edge applications in climate adaptation, hydrology, and sustainable agriculture require LAI information at higher spatial resolution (< 100m) to model and understand heterogeneous landscapes.
This dataset was built to support a machine-learning approach for mapping LAI from 30m-resolution Landsat images across the contiguous US (CONUS). The data were derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Version 6 LAI/FPAR, Landsat Collection 1 surface reflectance, and NLCD land cover datasets over 2006-2018 using Google Earth Engine. Each record includes a MODIS LAI value; the corresponding Landsat surface reflectance in the green, red, NIR, and SWIR1 bands; a land cover (biome) type; geographic location; and other auxiliary information. Each sample represents a MODIS LAI pixel (500m) within which a single biome type dominates 90% of the area. The spatial homogeneity of the samples was further controlled by a screening process based on the coefficient of variation of the Landsat surface reflectance. In total, there are approximately 1.6 million samples, stratified by biome, Landsat sensor, and saturation status from the MODIS LAI algorithm. This dataset can be used to train machine learning models and generate LAI maps for Landsat 5, 7, and 8 surface reflectance images within CONUS. Detailed information on sample generation and quality control can be found in the related journal article.

Resources in this dataset:
Resource Title: README. File Name: LAI_train_samples_CONUS_README.txt. Description: description and metadata of the main dataset. Recommended software: Notepad (https://www.microsoft.com/en-us/p/windows-notepad/9msmlrh6lzf3?activetab=pivot:overviewtab).
Resource Title: LAI_training_samples_CONUS. File Name: LAI_train_samples_CONUS_v0.1.1.csv. Description: CSV file of the training samples for estimating Leaf Area Index from Landsat surface reflectance images (Collection 1 Tier 1). Each sample has a MODIS LAI value and the corresponding surface reflectance derived from Landsat pixels within the MODIS pixel.
Contact: Yanghui Kang (kangyanghui@gmail.com)
Column description
UID: Unique identifier. Format: LATITUDE_LONGITUDE_SENSOR_PATHROW_DATE
Landsat_ID: Landsat image ID
Date: Landsat image date in "YYYYMMDD"
Latitude: Latitude (WGS84) of the MODIS LAI pixel center
Longitude: Longitude (WGS84) of the MODIS LAI pixel center
MODIS_LAI: MODIS LAI value in "m2/m2"
MODIS_LAI_std: MODIS LAI standard deviation in "m2/m2"
MODIS_LAI_sat: 0 - MODIS Main (RT) method used no saturation; 1 - MODIS Main (RT) method with saturation
NLCD_class: Majority class code from the National Land Cover Dataset (NLCD)
NLCD_frequency: Percentage of the area covered by the majority class from NLCD
Biome: Biome type code mapped from NLCD (see below for more information)
Blue: Landsat surface reflectance in the blue band
Green: Landsat surface reflectance in the green band
Red: Landsat surface reflectance in the red band
Nir: Landsat surface reflectance in the near infrared band
Swir1: Landsat surface reflectance in the shortwave infrared 1 band
Swir2: Landsat surface reflectance in the shortwave infrared 2 band
Sun_zenith: Solar zenith angle from the Landsat image metadata. This is a scene-level value.
Sun_azimuth: Solar azimuth angle from the Landsat image metadata. This is a scene-level value.
NDVI: Normalized Difference Vegetation Index computed from Landsat surface reflectance
EVI: Enhanced Vegetation Index computed from Landsat surface reflectance
NDWI: Normalized Difference Water Index computed from Landsat surface reflectance
GCI: Green Chlorophyll Index = Nir/Green - 1
Biome code
1 - Deciduous Forest
2 - Evergreen Forest
3 - Mixed Forest
4 - Shrubland
5 - Grassland/Pasture
6 - Cropland
7 - Woody Wetland
8 - Herbaceous Wetland
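As an illustration of how the columns above might be used, here is a minimal sketch that loads the CSV, recomputes two of the listed indices as a consistency check, and fits a generic regressor. The file name matches the resource listed above; the feature list and model choice are illustrative assumptions, not the method from the related article.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load the training samples (column names per the description above).
df = pd.read_csv("LAI_train_samples_CONUS_v0.1.1.csv")

# Recompute two listed indices from the reflectance bands as a sanity check.
df["NDVI_check"] = (df["Nir"] - df["Red"]) / (df["Nir"] + df["Red"])
df["GCI_check"] = df["Nir"] / df["Green"] - 1

# Illustrative feature set: the four bands used in the paper plus indices
# and the biome code.
features = ["Green", "Red", "Nir", "Swir1", "NDVI", "EVI", "NDWI", "GCI", "Biome"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["MODIS_LAI"], test_size=0.2, random_state=0
)

model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", model.score(X_test, y_test))
```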
Reference datasets (all data were accessed through Google Earth Engine):
Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., & Moore, R. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment.
MODIS Version 6 Leaf Area Index/FPAR 4-day L4 Global 500m: Myneni, R., Knyazikhin, Y., & Park, T. (2015). MOD15A2H MODIS/Terra Leaf Area Index/FPAR 8-Day L4 Global 500m SIN Grid V006, distributed by NASA EOSDIS Land Processes DAAC. https://doi.org/10.5067/MODIS/MOD15A2H.006
Landsat 5/7/8 Collection 1 Surface Reflectance: Landsat Level-2 Surface Reflectance Science Product courtesy of the U.S. Geological Survey. Masek, J.G., Vermote, E.F., Saleous, N.E., Wolfe, R., Hall, F.G., Huemmrich, K.F., Gao, F., Kutler, J., & Lim, T-K. (2006). A Landsat surface reflectance dataset for North America, 1990-2000. IEEE Geoscience and Remote Sensing Letters 3(1):68-72. http://dx.doi.org/10.1109/LGRS.2005.857030. Vermote, E., Justice, C., Claverie, M., & Franch, B. (2016). Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sensing of Environment. http://dx.doi.org/10.1016/j.rse.2016.04.008
National Land Cover Dataset (NLCD): Yang, L., Jin, S., Danielson, P., Homer, C.G., Gass, L., Bender, S.M., Case, A., Costello, C., Dewitz, J.A., Fry, J.A., Funk, M., Granneman, B.J., Liknes, G.C., Rigge, M.B., & Xian, G. A new generation of the United States National Land Cover Database: requirements, research priorities, design, and implementation strategies. ISPRS Journal of Photogrammetry and Remote Sensing, v. 146, p. 108-123. https://doi.org/10.1016/j.isprsjprs.2018.09.006
Recommended software: Microsoft Excel (https://www.microsoft.com/en-us/microsoft-365/excel)
This dataset contains data collected within limestone cedar glades at Stones River National Battlefield (STRI) near Murfreesboro, Tennessee. It contains information on soil microbial metabolic diversity for soil samples obtained from certain quadrat locations (points) within 12 selected cedar glades. This information derives from substrate utilization profiles based on Biolog EcoPlates (Biolog, Inc., Hayward, CA, USA), which were inoculated with soil slurries containing the entire microbial community present in each soil sample. EcoPlates contain 31 sole-carbon substrates (present in triplicate on each plate) and one blank (control) well. Once the microbial community from a soil sample is inoculated onto the plates, the plates are incubated and absorbance readings are taken at intervals.

For each quadrat location (point), one soil sample was obtained under sterile conditions, using a trowel wiped with methanol and rinsed with distilled water, and was placed into an autoclaved jar with a tight-fitting lid and placed on ice. Soil samples were transported to lab facilities on ice and immediately refrigerated. Within 24 hours after being removed from the field, soil samples were processed for community level physiological profiling (CLPP) using Biolog EcoPlates. First, for each soil sample, three measurements of gravimetric soil water content (SWC) were taken using a Mettler Toledo HB43 halogen moisture analyzer (Mettler Toledo, Columbus, OH, USA), and the mean of these three SWC measurements was used to calculate the 10-gram dry weight equivalent (DWE) for each soil sample. For each soil sample, a 10-gram DWE of fresh soil was added to 90 milliliters of sterile buffer solution in a 125-milliliter plastic bottle to make the first dilution. Bottles were agitated on a wrist-action shaker for 20 minutes, and a 10-milliliter aliquot was taken from each sample using sterilized pipette tips and added to 90 milliliters of sterile buffer solution to make the second dilution. The bottle containing the second dilution for each sample was agitated for 10 seconds by hand and poured into a sterile tray, and the second dilution was inoculated directly onto Biolog EcoPlates using a sterilized pipette set to deliver 150 microliters into each well. Each plate was immediately covered, placed in a covered box, and incubated in the dark at 25 degrees Celsius. Catabolism of each carbon substrate produced a proportional color change response (from the color of the inoculant to dark purple) due to the activity of the redox dye tetrazolium violet (present in all wells, including blanks). Plates were read at intervals of 24, 48, 72, 96, and 120 hours after inoculation using a Biolog MicroStation plate reader (Biolog, Inc., Hayward, CA, USA) reading absorbance at 590 nanometers.

For each soil sample and at each incubation time point, raw absorbance values were transformed according to the equations

T = (R - C) / AWCD, where AWCD = [Σ (R - C)] / n,

where T represents the transformed substrate-level response values, C is the absorbance value of the control wells (mean of 3 controls), R is the mean absorbance of the response wells (3 wells per carbon substrate), AWCD is the average well color development for the plate, and n is the number of carbon substrates (31 for EcoPlates).
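A minimal sketch of the plate-level transformation just defined, together with the area-under-curve integration and Shannon-Weaver index described in the next paragraph. The array shapes follow the EcoPlate layout described above, while the example values are hypothetical:

```python
import numpy as np

def transform_plate(absorbance, controls):
    """T = (R - C) / AWCD for one EcoPlate reading.

    absorbance: (31, 3) array, one row per carbon substrate (triplicate wells).
    controls:   length-3 array of blank (control) well absorbances.
    """
    C = controls.mean()                    # mean of the 3 control wells
    R = absorbance.mean(axis=1)            # mean response per substrate
    awcd = (R - C).mean()                  # average well color development (n = 31)
    return (R - C) / awcd

def shannon_weaver(activity):
    """H = -sum(p ln p), with p each substrate's share of total activity.

    activity: per-substrate activities (e.g., area under the incubation curve
    of T values). Non-positive activities are excluded, an assumption for
    substrates showing no utilization.
    """
    a = np.asarray(activity, dtype=float)
    a = a[a > 0]
    p = a / a.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical readings at 48, 72, 96, and 120 hours for one plate:
rng = np.random.default_rng(1)
hours = np.array([48, 72, 96, 120])
T_series = np.stack([transform_plate(0.2 + rng.random((31, 3)), np.full(3, 0.15))
                     for _ in hours])               # shape (4, 31)
auc = np.trapz(T_series, x=hours, axis=0)           # area under incubation curve
print(f"H = {shannon_weaver(auc):.3f}")
```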
To integrate the time-series data from multiple EcoPlate readings (both for AWCD and for individual substrates, T), the area under the incubation curve, from 48 hours to 120 hours of incubation time, was calculated.

To assess community-level microbial diversity, the Shannon-Weaver index (H) was calculated as

H = -Σ p ln(p),

where p is the ratio of the activity of each substrate (T values, area under the incubation curve) to the sum of the activities of all substrates for a given EcoPlate. Thus, the numeric values contained in the fields of this dataset represent H values (Shannon-Weaver index of diversity) based on the substrate-utilization diversity of the entire microbial community of each soil sample. Higher values indicate that the entire microbial community metabolized a greater diversity of the substrates present on the EcoPlates during the incubation period under consideration. Detailed descriptions of experimental design, field data collection procedures, laboratory procedures, and data analysis are presented in Cartwright (2014).

References:
Cartwright, J. (2014). Soil ecology of a rock outcrop ecosystem: abiotic stresses, soil respiration, and microbial community profiles in limestone cedar glades. Ph.D. dissertation, Tennessee State University.
Cofer, M., Walck, J., & Hidayati, S. (2008). Species richness and exotic species invasion in Middle Tennessee cedar glades in relation to abiotic and biotic factors. The Journal of the Torrey Botanical Society, 135(4), 540-553.
Garland, J., & Mills, A. (1991). Classification and characterization of heterotrophic microbial communities on the basis of patterns of community-level sole-carbon-source utilization. Applied and Environmental Microbiology, 57(8), 2351-2359.
Garland, J. (1997). Analysis and interpretation of community-level physiological profiles in microbial ecology. FEMS Microbiology Ecology, 24, 289-300.
Hackett, C. A., & Griffiths, B. S. (1997). Statistical analysis of the time-course of Biolog substrate utilization. Journal of Microbiological Methods, 30(1), 63-69.
Insam, H. (1997). A new set of substrates proposed for community characterization in environmental samples. In H. Insam & A. Rangger (Eds.), Microbial Communities: Functional versus Structural Approaches (pp. 259-260). New York: Springer.
Preston-Mafham, J., Boddy, L., & Randerson, P. F. (2002). Analysis of microbial community functional diversity using sole-carbon-source utilisation profiles - a critique. FEMS Microbiology Ecology, 42(1), 1-14. doi:10.1111/j.1574-6941.2002.tb00990.x
The transition to a free-market economic structure and, as a consequence, the privatization of large agricultural and industrial organizations and the birth of numerous new economic entities led to significant changes in the quantitative and qualitative characteristics of industrial organizations and peasant farms in the Republic of Armenia (RA). During the last decade, and especially the last 4-5 years, these structural changes in turn created certain complications in the fields mentioned in terms of ensuring the collection, comprehensiveness, and reliability of statistical data on prices and pricing.
In particular, in the case of radical structural changes, international recommendations require that the weights upon which price indexes are based be periodically updated. To obtain a realistic picture of the present situation and its dynamics when constructing indicators for a new base year, i.e. when collecting information on the set of representative goods, their weights, average annual prices, prices, and price changes, it is necessary to periodically conduct sample surveys for further improvement of the price index methodology.
The objectives of the survey were:
• to improve the sample and develop a new sample;
• to revise the base year and weights;
• to obtain additional information on sale prices of industrial and agricultural products and on purchase prices (acquisition of means of production) in RA;
• to improve the methodology for price observation and the calculation of price indexes (survey technology; price and other necessary data collection, processing, and analysis);
• to revise the base year for producer price indexes, the component structure, shares, calculation mechanism, etc.;
• to derive price indexes in line with international definitions, standards, and classifications;
• to complement the NSS RA price index database and create preconditions for its regular updating;
• to update the information on economic units covered by price index calculation;
• to ensure the use of international standards and classifications in statistics;
• to form preconditions for the extension of sample observation mechanisms in state statistics.
Besides the above, the need for this survey was also driven by the following reasons:
• the high mobility of micro, small, and medium-sized organizations, mainly caused by the increased pace of their creation, changes in activity or output, and closure, which reduces the opportunity to build long-term fixed-base time series of prices and price indexes;
• coding and recoding of activities according to the CPA classification, related to the introduction of the Armenian classification of economic activities (based on the European Communities' NACE classification).
National
Sample survey data [ssd]
SAMPLE DESIGN
Agriculture: The survey sample was designed in the absence of a farm register. The number of peasant farms was calculated and derived by database analysis. The number of villages (quotas) selected from each marz was determined taking into account the percentage of the rural population in each marz. The villages within each marz were selected randomly, and the peasant farms covered by the survey were selected based on the number of privatized plots. The survey was carried out in 200 rural communities selected from 10 marzes, with 5-20 households from each community. The pilot survey covered 1,901 farms.
Industry: The sample frame for the survey was designed as follows: industrial organizations with a share of 5 percent or more were selected by the reduction method from the fifth level (each subsection) of NACE for the whole of RA industry. 476 of the 2,231 industrial organizations covered by statistical observation were selected for the pilot survey.
Face-to-face [f2f]
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Index values for the four example graphs depicted in Figure 1.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Collapse risk assessment is an important basis for the prevention and control of geological disasters in mountainous areas. Existing research on collapse hazard is limited: traditional indicator assignment methods produce divergent weighting results, and existing comprehensive evaluation methods cannot simultaneously account for the fuzziness and randomness of the indicator data. This paper combines the respective advantages of linear programming theory and the cloud model, from a prevention-and-control point of view, to evaluate collapse samples. First, the weight interval of each evaluation index is determined by an improved analytic hierarchy process, the entropy weight method, and the coefficient-of-variation method. Second, a linear programming algorithm selects, within that interval, the specific weights under which each collapse sample's risk is greatest. Finally, a comprehensive cloud-model evaluation determines the risk level of each collapse. Twenty collapse samples previously surveyed along the G4217 Wenchuan-Lixian section are taken as research cases, and the evaluation results are compared with other evaluation methods and with field survey conditions to demonstrate the reliability and rationality of the method. The results show that 13 of the 20 collapse samples are extremely dangerous, 2 are highly dangerous, 4 are moderately dangerous, and 1 is of low danger; the extremely dangerous samples account for 65% of the total. Compared with other methods, this method agrees better with the actual situation.
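Of the three weighting schemes named above, the entropy weight method is the most mechanical to compute. The sketch below is a generic textbook formulation, not necessarily the paper's improved variant, and assumes positive, benefit-type indicator values (cost-type indicators would be inverted first):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (m samples x n indicators) matrix."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                        # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)           # entropy per indicator
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()                           # normalized weights

# Hypothetical matrix: 5 collapse samples scored on 4 indicators.
X = np.array([[7, 3, 5, 2],
              [6, 4, 4, 3],
              [9, 2, 6, 1],
              [5, 5, 3, 4],
              [8, 3, 5, 2]], dtype=float)
print(entropy_weights(X).round(3))
```

Indicators whose values vary more across samples carry less entropy and therefore receive larger weights, which is the intuition behind using this method alongside the subjective AHP weights.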
The dataset shows the primary data for the soil indicators covered in the article submission.
Note: this tile layer is no longer being updated; please use the dynamic map service instead.

The Index to Marine and Lacustrine Geological Samples Map Service, provided by NOAA's National Centers for Environmental Information (NCEI), is an ArcGIS map service providing access to metadata, data, and images about seafloor and lakebed cores, grabs, and dredge samples (sediment and rock) curated by and available from participating institutions.

How to cite: Curators of Marine and Lacustrine Geological Samples Consortium. The Index to Marine and Lacustrine Geological Samples (IMLGS). National Centers for Environmental Information, NOAA. doi:10.7289/V5H41PB8 [date of access].

More information: Publisher's Landing Page. The interactive map viewer at NCEI allows viewing samples by individual institution and is a recommended way of exploring the database (NOAA GeoPlatform entry).

Note: This is a tiled service, visible from global scales down to zoom level 10 (approx. 1:577,000 scale). The corresponding dynamic map service draws more slowly, but is visible at all scales and allows filtering.

Related information and services: Interactive Map Viewer at NCEI (NOAA GeoPlatform entry); ISO Metadata; WFS; WMS; ArcGIS Map Service (dynamic).
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The graph shows the changes in the g-index of the journal and the corresponding percentile, for comparison with the entire literature. The g-index is a scientometric index similar to the h-index, but it puts more weight on the total sum of citations. The g-index of a journal is g if the journal has published at least g papers whose combined citations total at least g².
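Stated formally, in a standard formulation of the definition above, with c_i denoting the citation counts of the journal's papers sorted in decreasing order:

```latex
% g-index: largest g such that the top g papers together have at least g^2 citations
g \;=\; \max\left\{\, g \in \mathbb{N} \;:\; \sum_{i=1}^{g} c_i \,\ge\, g^{2} \right\}
```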
This dataset includes soil wet aggregate stability measurements from the Upper Mississippi River Basin LTAR site in Ames, Iowa. Samples were collected in 2021 from this long-term tillage and cover crop trial in a corn-based agroecosystem. We measured wet aggregate stability using digital photography to quantify disintegration (slaking) of submerged aggregates over time, similar to the technique described by Fajardo et al. (2016) and Rieke et al. (2021). However, we adapted the technique to larger sample numbers by using a multi-well tray to submerge 20-36 aggregates simultaneously. We used this approach to measure the slaking index of 160 soil samples (2,120 aggregates). This dataset includes the slaking index calculated for each aggregate, and also summarized by sample; there were usually 10-12 aggregates measured per sample. We focused primarily on methodological issues, assessing the statistical power of the slaking index, the replication needed, and its sensitivity to cultural practices and to sample collection date. We found that small numbers of highly unstable aggregates lead to skewed distributions of the slaking index, and we concluded that at least 20 aggregates per sample are preferred to provide confidence in measurement precision; however, the experiment had high statistical power with only 10-12 replicates per sample. The slaking index was not sensitive to the initial size of dry aggregates (3 to 10 mm diameter); therefore, pre-sieving soils was not necessary. The field trial showed greater aggregate stability under no-till than under chisel-plow practice, and stability changed over the growing season. These results will be useful to researchers and agricultural practitioners who want a simple, fast, low-cost method for measuring wet aggregate stability on many samples.
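The exact slaking index formula is not given in this summary. The sketch below assumes a common formulation from image-based slaking tests, the relative increase in an aggregate's projected area after submersion, in the spirit of Fajardo et al. (2016); the dataset's actual formula and all example values are assumptions:

```python
import numpy as np

def slaking_index(area_t, area_0):
    """Per-aggregate slaking index as the relative increase in projected
    aggregate area after submersion (assumed formulation; the dataset's
    exact formula may differ)."""
    return (area_t - area_0) / area_0

# Hypothetical projected areas (mm^2) for 12 aggregates from one sample,
# measured at submersion (t = 0) and after 10 minutes under water.
rng = np.random.default_rng(7)
a0 = rng.uniform(20, 60, size=12)
a10 = a0 * rng.uniform(1.0, 2.5, size=12)      # some aggregates slake strongly

si = slaking_index(a10, a0)
# Skewed distributions: a few highly unstable aggregates inflate the mean,
# so it is worth reporting both mean and median per sample.
print(f"mean SI = {si.mean():.2f}, median SI = {np.median(si):.2f}")
```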