TL2CO2NS_7 is the Tropospheric Emission Spectrometer (TES)/Aura Level 2 Carbon Dioxide Nadir Special Observation Version 7 data product. TES Level 2 data contain retrieved species (or temperature) profiles at the observation targets and the estimated errors. The geolocation, quality, and other data (e.g., surface characteristics for nadir observations) are also provided. L2 modeled spectra are evaluated using radiative transfer modeling algorithms. The process, referred to as retrieval, compares observed spectra to the modeled spectra and iteratively updates the atmospheric parameters. L2 standard product files include information for one molecular species (or temperature) for an entire global survey or special observation run. A global survey consists of a maximum of 16 consecutive orbits. Nadir and limb observations are in separate L2 files, and a single ancillary file is composed of data that are common to both nadir and limb files.

A nadir sequence within the TES Global Survey is a fixed number of observations within an orbit for a Global Survey. Prior to April 24, 2005, it consisted of two low-resolution scans over the same ground locations. After April 24, 2005, Global Survey data consisted of three low-resolution scans. The Nadir standard product consists of four files, where each file is composed of the Global Survey Nadir observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Nadir observations currently only use a single filter mix. A Global Survey consists of observations along 16 consecutive orbits at the start of a two-day cycle, over which 3,200 retrievals are performed. Each observation is the input for retrievals of species Volume Mixing Ratios (VMR), temperature profiles, surface temperature, and other data parameters with associated pressure levels, precision, total error, vertical resolution, total column density, and other diagnostic quantities.

Each TES Level 2 standard product reports information in a swath format conforming to the HDF-EOS Aura File Format Guidelines. Each Swath object is bounded by the number of observations in a global survey and a predefined set of pressure levels representing slices through the atmosphere. Each standard product can have a variable number of observations depending upon the Global Survey configuration and whether averaging is employed. Also, missing or bad retrievals are not reported. The organization of data within the Swath object is based on a superset of the UARS pressure levels used to report concentrations of trace atmospheric gases. The reporting grid is the same pressure grid used for modeling. There are 67 reporting levels from 1211.53 hPa, which allows for very high surface pressure conditions, to 0.1 hPa, about 65 km. In addition, the products will report values directly at the surface when possible or at the observed cloud top level. Thus, in the Standard Product files, each observation can potentially contain estimates for the concentration of a particular molecule at 67 different pressure levels within the atmosphere. However, for most retrieved profiles, the highest pressure levels are not observed due to a surface at lower pressure or cloud obscuration. For pressure levels corresponding to altitudes below the cloud top or surface, where measurements were not possible, a fill value will be applied.
To minimize the duplication of information between the individual species standard products, data fields common to each species (such as spacecraft coordinates, emissivity, and other data fields) have been collected into a separate standard product, termed the TES L2 Ancillary Data product (ESDT short name: TL2ANC). Users of this product should also obtain the Ancillary Data product.
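Because the products follow the HDF-EOS Aura File Format Guidelines (HDF-EOS5, which is built on HDF5), they can be inspected with generic HDF5 tooling. The following is a minimal, hedged sketch using h5py: the swath name, field names, and fill value are illustrative assumptions, and the actual group paths should be taken from the product's file specification.

```python
# Minimal sketch: open a TES L2 nadir swath with h5py and mask fill values.
# The swath name, field names, and fill value below are illustrative assumptions;
# consult the product file specification for the real layout.
import h5py
import numpy as np

def read_profiles(path, swath="CO2NadirSwath", field="Species"):
    """Return (pressure_hPa, profiles) with fill values replaced by NaN."""
    with h5py.File(path, "r") as f:
        grp = f["HDFEOS"]["SWATHS"][swath]                    # HDF-EOS5 swath layout (assumed)
        pressure = grp["Geolocation Fields"]["Pressure"][:]   # 67-level reporting grid (assumed name)
        dset = grp["Data Fields"][field]
        data = dset[:].astype(np.float64)
        fill = dset.attrs.get("_FillValue", np.array([-999.0]))[0]  # assumed fill value
        data[data == fill] = np.nan                           # levels below surface or cloud top
    return pressure, data

# Usage (hypothetical file name):
# p, vmr = read_profiles("TES-Aura_L2-CO2-Nadir.he5")
# print(p.shape, vmr.shape)   # e.g. (67,), (n_observations, 67)
```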
TL2ATMTN_7 is the Tropospheric Emission Spectrometer (TES)/Aura Level 2 Atmospheric Temperatures Nadir Version 7 data product. TES was an instrument aboard NASA's Aura satellite and was launched from California on July 15, 2004. Data collection for TES is complete. TES Level 2 data contain retrieved species (or temperature) profiles at the observation targets and the estimated errors. The geolocation, quality, and other data (e.g., surface characteristics for nadir observations) were also provided. L2 modeled spectra were evaluated using radiative transfer modeling algorithms. The process, referred to as retrieval, compared observed spectra to the modeled spectra and iteratively updated the atmospheric parameters. L2 standard product files included information for one molecular species (or temperature) for an entire global survey or special observation run. A global survey consisted of a maximum of 16 consecutive orbits. Nadir and limb observations were added to separate L2 files, and a single ancillary file was composed of data that were common to both nadir and limb files.

A Nadir sequence within the TES Global Survey was a fixed number of observations within an orbit for a Global Survey. Prior to April 24, 2005, it consisted of two low-resolution scans over the same ground locations. After April 24, 2005, Global Survey data consisted of three low-resolution scans. The Nadir standard product consisted of four files, where each file was composed of the Global Survey Nadir observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Nadir observations only used a single filter mix. A Limb sequence within the TES Global Survey involved three high-resolution scans over the same limb locations. The Limb standard product consisted of four files, where each file was composed of the Global Survey Limb observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Limb observations used a repeating sequence of filter wheel positions. Special Observations could only be scheduled during the 9 or 10 orbit gaps in the Global Surveys, and were conducted in any of three basic modes: stare, transect, and step-and-stare. The mode used depended on the science requirement.

A Global Survey consisted of observations along 16 consecutive orbits at the start of a two-day cycle, over which 4,608 retrievals were performed (1,152 nadir retrievals and 1,152 retrievals in time-ordered sequence for each limb observation). Each observation was the input for retrievals of species Volume Mixing Ratios (VMR), temperature profiles, surface temperature, and other data parameters with associated pressure levels, precision, total error, vertical resolution, total column density, and other diagnostic quantities. Each TES Level 2 standard product reported information in a swath format conforming to the HDF-EOS Aura File Format Guidelines. Each Swath object was bounded by the number of observations in a global survey and a predefined set of pressure levels representing slices through the atmosphere. Each standard product could have a variable number of observations depending upon the Global Survey configuration and whether averaging was employed. Also, missing or bad retrievals were not reported.

Each limb observation (Limb 1, Limb 2, and Limb 3) was processed independently. Thus, each limb standard product consisted of three sets, where each set consisted of 1,152 observations. For TES, the swath object represented one of these sets. Thus, each limb standard product consisted of three swath objects, one for each observation: Limb 1, Limb 2, and Limb 3. The organization of data within the Swath object was based on a superset of the Upper Atmosphere Research Satellite (UARS) pressure levels used to report concentrations of trace atmospheric gases. The reporting grid was the same pressure grid used for modeling. There were 67 reporting levels from 1211.53 hPa, which allowed for very high surface pressure conditions, to 0.1 hPa, about 65 km.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper explores a unique dataset of all the SET ratings provided by students of one university in Poland at the end of the winter semester of the 2020/2021 academic year. The SET questionnaire used by this university is presented in Appendix 1. The dataset is unique for several reasons. It covers all SET surveys filled in by students in all fields and levels of study offered by the university. In the period analysed, the university was entirely in the online regime amid the Covid-19 pandemic. While the expected learning outcomes formally have not been changed, the online mode of study could have affected the grading policy and could have implications for some of the studied SET biases. This Covid-19 effect is captured by the econometric models and discussed in the paper.

The average SET scores were matched with the characteristics of the teacher (degree, seniority, gender, and SET scores in the past six semesters); the course characteristics (time of day, day of the week, course type, course breadth, class duration, and class size); the attributes of the SET survey responses (the percentage of students providing SET feedback); and the grades of the course (mean, standard deviation, and percentage failed). Data on course grades are also available for the previous six semesters. This rich dataset allows many of the biases reported in the literature to be tested for and new hypotheses to be formulated, as presented in the introduction section.

The unit of observation, or a single row in the data set, is identified by three parameters: teacher unique id (j), course unique id (k), and the question number in the SET questionnaire (n ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}). This means that for each pair (j, k) we have nine rows, one for each SET survey question, or sometimes fewer when students did not answer one of the SET questions at all. For example, the dependent variable SET_score_avg(j, k, n) for the triplet (j = John Smith, k = Calculus, n = 2) is calculated as the average of all Likert-scale answers to question 2 in the SET survey distributed to all students that took the Calculus course taught by John Smith. The data set has 8,015 such observations or rows. The full list of variables, or columns, included in the analysis is presented in the attached files. Their description refers to the triplet (teacher id = j, course id = k, question number = n). When the last value of the triplet (n) is dropped, the variable takes the same value for all n ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}.

Two attachments: a Word file with the variable descriptions, and an Rdata file with the data set (for the R language).

Appendix 1. The SET questionnaire used for this paper.

Evaluation survey of the teaching staff of [university name]. Please complete the following evaluation form, which aims to assess the lecturer's performance. Only one answer should be indicated for each question. The answers are coded in the following way: 5 - I strongly agree; 4 - I agree; 3 - Neutral; 2 - I don't agree; 1 - I strongly don't agree. Each of the following statements is rated on this five-point scale:

1. I learnt a lot during the course.
2. I think that the knowledge acquired during the course is very useful.
3. The professor used activities to make the class more engaging.
4. If it was possible, I would enroll for the course conducted by this lecturer again.
5. The classes started on time.
6. The lecturer always used time efficiently.
7. The lecturer delivered the class content in an understandable and efficient way.
8. The lecturer was available when we had doubts.
9. The lecturer treated all students equally regardless of their race, background and ethnicity.
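To make the (j, k, n) structure concrete, the following minimal pandas sketch builds SET_score_avg from toy student-level answers. All column names and values are hypothetical; the real variable list is documented in the attached Word file, and the data are distributed as an Rdata file (readable in Python with, for example, pyreadr).

```python
# Illustrative sketch of the (teacher j, course k, question n) structure described above.
# Column names and values are hypothetical placeholders, not the actual variable names.
import pandas as pd

# Toy student-level answers: one row per student, teacher, course, and question (1-5 Likert).
answers = pd.DataFrame({
    "teacher_id": ["T_JohnSmith", "T_JohnSmith", "T_JohnSmith", "T_Other"],
    "course_id":  ["C_Calculus", "C_Calculus", "C_Calculus", "C_Statistics"],
    "question_n": [2, 2, 2, 2],
    "likert":     [5, 4, 3, 5],
})

# SET_score_avg(j, k, n): mean Likert answer over all responding students.
set_score_avg = (answers
                 .groupby(["teacher_id", "course_id", "question_n"], as_index=False)["likert"]
                 .mean()
                 .rename(columns={"likert": "SET_score_avg"}))
print(set_score_avg)
# Each (j, k) pair contributes up to nine such rows, one per survey question.
```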
TL2H2ON_8 is the Tropospheric Emission Spectrometer (TES)/Aura Level 2 Water Vapor Nadir Version 8 data product. TES was an instrument aboard NASA's Aura satellite and was launched from California on July 15, 2004. Data collection for TES is complete. TES Level 2 data contain retrieved species (or temperature) profiles at the observation targets and the estimated errors. The geolocation, quality, and other data (e.g., surface characteristics for nadir observations) were also provided. L2 modeled spectra were evaluated using radiative transfer modeling algorithms. The process, referred to as retrieval, compared observed spectra to the modeled spectra and iteratively updated the atmospheric parameters. L2 standard product files included information for one molecular species (or temperature) for an entire global survey or special observation run. A global survey consisted of a maximum of 16 consecutive orbits.

A nadir sequence within the TES Global Survey was a fixed number of observations within an orbit for a Global Survey. Prior to April 24, 2005, it consisted of two low-resolution scans over the same ground locations. After April 24, 2005, Global Survey data consisted of three low-resolution scans. The Nadir standard product consists of four files, where each file is composed of the Global Survey Nadir observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Nadir observations only used a single filter mix. A Global Survey consisted of observations along 16 consecutive orbits at the start of a two-day cycle, over which 3,200 retrievals were performed. Each observation was the input for retrievals of species volume mixing ratios (VMRs), temperature profiles, surface temperature, and other data parameters with associated pressure levels, precision, total error, vertical resolution, total column density, and other diagnostic quantities.

Each TES Level 2 standard product reported information in a swath format conforming to the HDF-EOS Aura File Format Guidelines. Each Swath object was bounded by the number of observations in a global survey and a predefined set of pressure levels representing slices through the atmosphere. Each standard product could have a variable number of observations depending upon the Global Survey configuration and whether averaging was employed. Also, missing or bad retrievals were not reported. The organization of data within the Swath object was based on a superset of the Upper Atmosphere Research Satellite (UARS) pressure levels used to report concentrations of trace atmospheric gases. The reporting grid was the same pressure grid used for modeling. There were 67 reporting levels from 1211.53 hPa, which allowed for very high surface pressure conditions, to 0.1 hPa, about 65 km. In addition, the products reported values directly at the surface when possible or at the observed cloud top level. Thus, in the Standard Product files, each observation could potentially contain estimates for the concentration of a particular molecule at 67 different pressure levels within the atmosphere. However, for most retrieved profiles, the highest pressure levels were not observed due to a surface at lower pressure or cloud obscuration. For pressure levels corresponding to altitudes below the cloud top or surface, where measurements were not possible, a fill value was applied.

To minimize the duplication of information between the individual species standard products, data fields common to each species (such as spacecraft coordinates, emissivity, and other data fields) have been collected into a separate standard product, termed the TES L2 Ancillary Data product (ESDT short name: TL2ANC). Users of this product should also obtain the Ancillary Data product.
These data are daily dust count observations taken in College-Fairbanks, Alaska, from 23 March 1933 to 29 August 1933. The data are part of a larger collection titled "Second International Polar Year Records, 1931-1936, Department of Terrestrial Magnetism, Carnegie Institute of Washington." Within this larger collection, the data are identified as "Series 1: College-Fairbanks IPY Station Records and Data, 1932-1934: Subseries C: Auroral and Meteorological Records and Data, 1932-1933: Dust Count Observations, March 1933 - August 1933."

The data are provided in a PDF copy of the handwritten entries (Dust_Count_Observations_March1933_to_August1933.pdf). Two supporting files are also included in this data set. The first is a copy of the handwritten data transcribed to a Microsoft Excel spreadsheet (Dust_Count_Observations_March1933_to_August1933.xls). The second is a PDF document that explains the larger collection (DTM_Collection_Description.pdf).

The entries were recorded using an Aitken Dust Counter. Each entry includes up to 10 counts per day with measurements of wind, clouds, and visibility. The handwritten copy has the most complete data, as some of the handwritten notes were not transcribed into the computer spreadsheet. For example, handwritten notes concerning problems with the counter itself were not transcribed into the computer spreadsheet.

The data are available via FTP. NOAA@NSIDC believes these data to be of value but is unable to provide documentation. If you have information about this data set that others would find useful, please contact NSIDC User Services.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This raster dataset covers all of NSW and is a raw count of inundated pixel observations from all available Landsat acquisitions from mid 1984 to mid 2016. The dataset was produced by applying a …Show full descriptionThis raster dataset covers all of NSW and is a raw count of inundated pixel observations from all available Landsat acquisitions from mid 1984 to mid 2016. The dataset was produced by applying a water index to each Landsat scene using the technique developed by Fisher and Danaher (2016). Water indexed images were classified into inundated and not inundated classes using a threshold value of -10. Masking of cloud, cloud shadow and other erroneous pixels resulting from sensor anomalies was undertaken using the F-mask technique (Zhu and Woodcock 2012) and these pixels were allocated a 'no data' value . The classified images (with pixels allocated to 'inundated' or 'not inundated' or 'no data' classes) were then stacked and the number of inundated observations were counted for each pixel in available Landsat scenes. Known commission errors include areas of terrain shadow, building shadow especially in urban areas, and tall dense forest such as some pine plantations. Known omission errors include areas of greater vegetation cover. Potential users should note that inundated observations are only recorded for cloud free observation times and locations, thus inundation events on cloudy days may not have been detected.
This submission derives from Work Package 3 "Producers' behaviours, agrobiodiversity, and food diversity" of the H2020 project FoodLAND "Food and Local, Agricultural and Nutritional Diversity" (2020-2025). It consists of two datasets and the two survey questionnaires used to gather these data (English version). The datasets include information about small crop and fish farmers, respectively, sampled in so-called Food Hubs (i.e., local production regions) in Morocco, Kenya, Tanzania, Tunisia, and Uganda. The crop farmers' data were collected in two Food Hubs in each country, while the fish farmers' data were collected in one Food Hub in each of Kenya and Uganda, plus a limited number of observations in Tanzania, in all cases using standardised survey questionnaires. In one location in each of Morocco, Kenya, Tanzania and Tunisia, lab-in-the-field experiments were run with crop farmers, while in one location in Uganda the same experiments were run with fish farmers. The experimental protocols have been submitted separately. The datasets are provided as Excel Workbooks, while the questionnaires are provided in PDF format. Each dataset includes one sheet about "Conditions" (one row per farmer: 4,529 observations for crop farmers and 927 for fish farmers), one sheet about "Production" (one row per farmer and per up to three crops: 10,668 observations for crop farmers and 1,245 for fish farmers), and one sheet with the results of the behavioural experiments (one row per farmer: 1,987 observations for crop farmers and 406 for fish farmers).
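As an illustration of the workbook layout described above, the following pandas sketch loads the "Conditions" and "Production" sheets; the workbook file name is hypothetical, while the sheet names follow the description.

```python
# Minimal sketch for loading one of the Excel workbooks described above.
# The workbook file name is a hypothetical placeholder.
import pandas as pd

workbook = "foodland_crop_farmers.xlsx"   # hypothetical file name

conditions = pd.read_excel(workbook, sheet_name="Conditions")   # one row per farmer
production = pd.read_excel(workbook, sheet_name="Production")   # up to three crops per farmer

# Cross-check against the documented sample sizes (4,529 and 10,668 rows for crop farmers).
print(len(conditions), len(production))
```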
TL2MTLN_7 is the Tropospheric Emission Spectrometer (TES)/Aura Level 2 Methanol Nadir Version 7 data product. TES was an instrument aboard NASA's Aura satellite and was launched from California on July 15, 2004. Data collection for TES is complete. The product consisted of information for one molecular species for an entire Global Survey or Special Observation. TES Level 2 data contain retrieved species (or temperature) profiles at the observation targets and the estimated errors. The geolocation, quality, and other data (e.g., surface characteristics for nadir observations) were also provided. L2 modeled spectra were evaluated using radiative transfer modeling algorithms. The process, referred to as retrieval, compared observed spectra to the modeled spectra and iteratively updated the atmospheric parameters. L2 standard product files included information for one molecular species (or temperature) for an entire global survey or special observation run. A global survey consisted of a maximum of 16 consecutive orbits. Nadir and limb observations were in separate L2 files, and a single ancillary file was composed of data that were common to both nadir and limb files.

A nadir sequence within the TES Global Survey was a fixed number of observations within an orbit for a Global Survey. Prior to April 24, 2005, it consisted of two low-resolution scans over the same ground locations. After April 24, 2005, Global Survey data consisted of three low-resolution scans. The Nadir standard product consisted of four files, where each file was composed of the Global Survey Nadir observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Limb observations used a repeating sequence of filter wheel positions. Special Observations could only be scheduled during the 9 or 10 orbit gaps in the Global Surveys, and were conducted in any of three basic modes: stare, transect, and step-and-stare. The mode used depended on the science requirement.

A Global Survey consisted of observations along 16 consecutive orbits at the start of a two-day cycle, over which 4,608 retrievals were performed (1,152 nadir retrievals and 1,152 retrievals in time-ordered sequence for each limb observation). Each observation was the input for retrievals of species volume mixing ratios (VMRs), temperature profiles, surface temperature, and other data parameters with associated pressure levels, precision, total error, vertical resolution, total column density, and other diagnostic quantities. Each TES Level 2 standard product reported information in a swath format conforming to the HDF-EOS Aura File Format Guidelines. Each Swath object was bounded by the number of observations in a global survey and a predefined set of pressure levels representing slices through the atmosphere. Each standard product could have a variable number of observations depending upon the Global Survey configuration and whether averaging was employed. Also, missing or bad retrievals were not reported.

The organization of data within the Swath object was based on a superset of the Upper Atmosphere Research Satellite (UARS) pressure levels used to report concentrations of trace atmospheric gases. The reporting grid was the same pressure grid used for modeling. There were 67 reporting levels from 1211.53 hPa, which allowed for very high surface pressure conditions, to 0.1 hPa, about 65 km. In addition, the products reported values directly at the surface when possible or at the observed cloud top level. Thus, in the Standard Product files, each observation could potentially contain estimates for the concentration of a particular molecule at 67 different pressure levels within the atmosphere. However, for most retrieved profiles, the highest pressure levels were not observed due to a surface at lower pressure or cloud obscuration. For pressure levels corresponding to altitudes below the cloud top or surface, where measurements were not possible, a fill value was applied.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data repository for the publication entitled 'Observation of the distribution of nuclear magnetization in a molecule'.
The binned experimental data for each scan are named 'data_scan_"Scan Number".csv', showing the resonant 225Ra+ rate and its error as a function of spectroscopy laser wavenumber.
The output file for PGOPHER is named 225_RaF.pgo, which was used to determine the molecular constants of 225RaF. It contains all of the aforementioned data loaded into the software and the resulting fit.
PGOPHER is a publicly available software package for simulating and fitting the structure of molecules and can be found at: https://pgopher.chm.bris.ac.uk/
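A minimal sketch for inspecting one of the binned scan files is shown below. The column names are assumptions, since only the reported quantities (rate, error, wavenumber) are documented here; the file name follows the stated naming pattern with scan number 1 as an example.

```python
# Hedged sketch: plot one binned scan (rate vs. laser wavenumber with error bars).
# Column names are assumptions; adjust to the headers actually present in the CSV.
import pandas as pd
import matplotlib.pyplot as plt

scan = pd.read_csv("data_scan_1.csv")   # assumed columns: wavenumber, rate, rate_err

plt.errorbar(scan["wavenumber"], scan["rate"], yerr=scan["rate_err"], fmt="o", ms=3)
plt.xlabel("Laser wavenumber (cm$^{-1}$)")
plt.ylabel("Resonant $^{225}$Ra$^+$ rate")
plt.title("Scan 1")
plt.show()
```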
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset was developed from real data on the usage of the corporate data network at the Universidade Federal do Rio Grande do Norte (UFRN). The main objective is to enable detailed observation of the university's network infrastructure and make this data available to the academic community. Data collection started on August 30, 2023, with the last query conducted on February 7, 2025, covering a total of approximately 19 months of continuous observations. During this period, about 1.5 months of data were lost due to failures in the data collection process or maintenance of the system responsible for capturing the data.
The data collections cover administrative, academic, and classroom sectors, spanning a total of 13 buildings within the university, providing a broad view of the network across different environments.
The dataset contains a total of 1,675,843 entries, each with 49 attributes, and is available in CSV format.
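Given the file size, a chunked read is a reasonable way to work with the CSV in limited memory; the sketch below assumes a hypothetical file name.

```python
# Hedged sketch: iterate over the CSV in chunks rather than loading ~1.7 million rows at once.
# The file name is a hypothetical placeholder.
import pandas as pd

row_count = 0
for chunk in pd.read_csv("ufrn_network_usage.csv", chunksize=100_000):
    row_count += len(chunk)            # replace with per-chunk aggregation as needed

print(row_count)   # expected to be on the order of 1,675,843
```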
https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/insitu-gridded-observations-global-and-regional/insitu-gridded-observations-global-and-regional_15437b363f02bf5e6f41fc2995e3d19a590eb4daff5a7ce67d1ef6c269d81d68.pdf
This dataset provides high-resolution gridded temperature and precipitation observations from a selection of sources. Additionally, the dataset contains daily global average near-surface temperature anomalies. All fields are defined on either daily or monthly frequency. The datasets are regularly updated to incorporate recent observations. The included data sources are commonly known as GISTEMP, Berkeley Earth, CPC and CPC-CONUS, CHIRPS, IMERG, CMORPH, GPCC and CRU, where the abbreviations are explained below. These data have been constructed from high-quality analyses of meteorological station series and rain gauges around the world, and as such provide a reliable source for the analysis of weather extremes and climate trends. The regular update cycle makes these data suitable for a rapid study of recently occurring phenomena or events.

The NASA Goddard Institute for Space Studies temperature analysis dataset (GISTEMP-v4) combines station data of the Global Historical Climatology Network (GHCN) with the Extended Reconstructed Sea Surface Temperature (ERSST) to construct a global temperature change estimate. The Berkeley Earth Foundation dataset (BERKEARTH) merges temperature records from 16 archives into a single coherent dataset.

The NOAA Climate Prediction Center datasets (CPC and CPC-CONUS) define a suite of unified precipitation products with consistent quantity and improved quality by combining all information sources available at CPC and by taking advantage of the optimal interpolation (OI) objective analysis technique. The Climate Hazards Group InfraRed Precipitation with Station dataset (CHIRPS-v2) incorporates 0.05° resolution satellite imagery and in-situ station data to create gridded rainfall time series over the African continent, suitable for trend analysis and seasonal drought monitoring. The Integrated Multi-satellitE Retrievals dataset (IMERG) by NASA uses an algorithm to intercalibrate, merge, and interpolate "all" satellite microwave precipitation estimates, together with microwave-calibrated infrared (IR) satellite estimates, precipitation gauge analyses, and potentially other precipitation estimators over the entire globe at fine time and space scales for the Tropical Rainfall Measuring Mission (TRMM) and its successor, Global Precipitation Measurement (GPM) satellite-based precipitation products. The Climate Prediction Center morphing technique dataset (CMORPH) by NOAA has been created using precipitation estimates that have been derived from low orbiter satellite microwave observations exclusively; geostationary IR data are then used as a means to transport the microwave-derived precipitation features during periods when microwave data are not available at a location. The Global Precipitation Climatology Centre dataset (GPCC) is a centennial product of monthly global land-surface precipitation based on the ~80,000 stations world-wide that feature record durations of 10 years or longer. The data coverage per month varies from ~6,000 (before 1900) to more than 50,000 stations. The Climatic Research Unit dataset (CRU v4) features an improved interpolation process, which delivers full traceability back to station measurements. The station measurements of temperature and precipitation are public, as are the gridded dataset and national averages for each country. Cross-validation was performed at a station level, and the results have been published as a guide to the accuracy of the interpolation.
This catalogue entry complements the E-OBS record in many aspects, as it intends to provide high-resolution gridded meteorological observations at a global rather than continental scale. These data may be suitable as a baseline for model comparisons or extreme event analysis in the CMIP5 and CMIP6 datasets.
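The licence URL above indicates the catalogue entry is served through the Copernicus Climate Data Store, so a retrieval would typically go through the cdsapi client. The sketch below is a hedged example: the dataset identifier is inferred from that URL, and the request keys and values are placeholders that should be replaced with the selection fields shown on the CDS download form.

```python
# Hedged sketch of a CDS API retrieval. The dataset name is inferred from the licence URL
# above; the request keys and values below are assumptions, not the documented form fields.
import cdsapi

client = cdsapi.Client()   # requires a ~/.cdsapirc file with your CDS credentials

client.retrieve(
    "insitu-gridded-observations-global-and-regional",
    {
        "origin": "cru",                 # assumed key: one of the listed source datasets
        "variable": "temperature",       # assumed key
        "time_aggregation": "monthly",   # assumed key
        "year": ["2020"],                # assumed key
        "format": "zip",                 # assumed key
    },
    "cru_temperature_2020.zip",
)
```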
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents median income data over a decade or more for males and females categorized by Total, Full-Time Year-Round (FT), and Part-Time (PT) employment in Onley. It showcases annual income, providing insights into gender-specific income distributions and the disparities between full-time and part-time work. The dataset can be utilized to gain insights into gender-based pay disparity trends and explore the variations in income for male and female individuals.
Key observations: Insights from 2023
Based on our analysis of the ACS 2019-2023 5-Year Estimates, we present the following observations: - All workers, aged 15 years and older: In Onley, while the Census reported a median income of $48,417 for all female workers aged 15 years and older, data for males in the same category was unavailable due to an insufficient number of sample observations.
Because income data for males was not available from the Census Bureau, conducting a comprehensive analysis of gender-based pay disparity in the town of Onley was not possible.
- Full-time workers, aged 15 years and older: In Onley, for full-time, year-round workers aged 15 years and older, the Census reported a median income of $49,745 for females, while data for males was unavailable due to an insufficient number of sample observations. As there was no available median income data for males, conducting a comprehensive assessment of gender-based pay disparity in Onley was not feasible.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. All incomes have been adjusted for inflation and are presented in 2023-inflation-adjusted dollars.
Gender classifications include:
Employment type classifications include:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Onley median household income by race; refer to that dataset for additional context.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents median income data over a decade or more for males and females categorized by Total, Full-Time Year-Round (FT), and Part-Time (PT) employment in North Johns. It showcases annual income, providing insights into gender-specific income distributions and the disparities between full-time and part-time work. The dataset can be utilized to gain insights into gender-based pay disparity trends and explore the variations in income for male and female individuals.
Key observations: Insights from 2023
Based on our analysis of the ACS 2019-2023 5-Year Estimates, we present the following observations: - All workers, aged 15 years and older: In North Johns, while the Census reported a median income of $46,063 for all male workers aged 15 years and older, data for females in the same category was unavailable due to an insufficient number of sample observations.
Given the absence of income data for females from the Census Bureau, conducting a thorough analysis of gender-based pay disparity in the town of North Johns was not possible.
- Full-time workers, aged 15 years and older: In North Johns, for full-time, year-round workers aged 15 years and older, the Census reported a median income of $56,875 for males, while data for females was unavailable due to an insufficient number of sample observations. As there was no available median income data for females, conducting a comprehensive assessment of gender-based pay disparity in North Johns was not feasible.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. All incomes have been adjusted for inflation and are presented in 2023-inflation-adjusted dollars.
Gender classifications include:
Employment type classifications include:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for North Johns median household income by race; refer to that dataset for additional context.
DISCOVERAQ_California_Pandora_Data contains all of the Pandora instrumentation data collected during the DISCOVER-AQ field study. Contained in this dataset are column measurements of NO2 and O3. Pandoras were situated at various ground sites across the study area, including Arvin-DiGiorgio, Bakersfield, Corcoran, Fresno, Hanford, Huron, Madera, Parlier, Porterville, Shafter, Tranquility and Visalia Airport. This data product contains only data from the California deployment, and data collection is complete.

Understanding the factors that contribute to near-surface pollution is difficult using only satellite-based observations. The incorporation of surface-level measurements from aircraft and ground-based platforms provides the crucial information necessary to validate and expand upon the use of satellites in understanding near-surface pollution. Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) was a four-year campaign conducted in collaboration between NASA Langley Research Center, NASA Goddard Space Flight Center, NASA Ames Research Center, and multiple universities to improve the use of satellites to monitor air quality for public health and environmental benefit. Through targeted airborne and ground-based observations, DISCOVER-AQ enabled more effective use of current and future satellites to diagnose ground-level conditions influencing air quality.

DISCOVER-AQ employed two NASA aircraft, the P-3B and King Air, with the P-3B completing in-situ spiral profiling of the atmosphere (aerosol properties, meteorological variables, and trace gas species). The King Air conducted both passive and active remote sensing of the atmospheric column extending below the aircraft to the surface. Data from an existing network of surface air quality monitors, AERONET sun photometers, Pandora UV/vis spectrometers and model simulations were also collected. Further, DISCOVER-AQ employed many surface monitoring sites, with measurements being made on the ground in conjunction with the aircraft. The B200 and P-3B conducted flights in Baltimore-Washington, D.C. in 2011, Houston, TX in 2013, San Joaquin Valley, CA in 2013, and Denver, CO in 2014. These regions were targeted due to being in violation of the National Ambient Air Quality Standards (NAAQS).

The first objective of DISCOVER-AQ was to determine and investigate correlations between surface measurements and satellite column observations for the trace gases ozone (O3), nitrogen dioxide (NO2), and formaldehyde (CH2O) to understand how satellite column observations can diagnose surface conditions. DISCOVER-AQ also had the objective of using surface-level measurements to understand how satellites measure diurnal variability and to understand what factors control diurnal variability. Lastly, DISCOVER-AQ aimed to explore horizontal scales of variability, such as regions with steep gradients and urban plumes.
This dataset consists of an inventory of the locations of liquefaction-related phenomena triggered by the 7 January 2020 M6.4 Puerto Rico earthquake. The inventory is primarily based on field observations collected during post-earthquake reconnaissance conducted by the USGS and partners (Allstadt and others, 2020). Some additional locations were added based on reconnaissance reports by other groups (Miranda and others, 2020; Morales-Velez and others, 2020). We delineated 43 polygons of liquefaction areas and lateral spreading where we had sufficient evidence to do so (liquefaction_polygons_20210913.shp), but all outlines are approximate because liquefaction is primarily a subsurface process and surface expression may not extend over the entire area where liquefaction occurred at depth (and sometimes surface expression may not be present at all, though we are not able to map those cases). We used scientific judgment to group larger areas of fissuring, spreading, sand boils, and settlement into polygons that enclosed these clusters of features while maintaining relatively simple geometries. All other locations are point features; one file contains 32 points of occurrences (liquefaction_points_20210913.shp) and a separate file contains 81 null points, or non-occurrences (liquefaction_nullpoints_20210913.shp), which are locations in environments susceptible to liquefaction that we visited but did not observe any evidence that ground failure occurred. This inventory is not a complete mapping of all areas where liquefaction or lateral spreading occurred; it represents areas that were visited in the field and where evidence of liquefaction was visible at the surface. There were likely other liquefaction-related ground failure occurrences that are not mapped here.

For the files that contain liquefaction occurrences, we defined the following attributes.

“Type”: Polygons were defined as one of three types. The “liquefaction area” type indicates areas where liquefaction surface manifestations (fissures, sand ejecta, sand boils, settlement) were pervasive but there was no obvious lateral displacement. The “lateral spreading” type indicates locations where lateral displacements, often towards bodies of water and likely related to liquefaction, were observed. The “settlement” type indicates areas where vertical deformation was noted but lacked definitive evidence of liquefaction. The latter two types may or may not have sand ejecta. Points have a wider variety of types because they refer to specific individual occurrences of possible manifestations of liquefaction and lateral spreading (bridge abutment damage, sand boils, broken pipes, compressional features, raised manhole).

“Ejecta”: This attribute has a value of 1 if sand ejecta were noted and 0 if they were not noted. The value is left as Null if the presence or absence of ejecta could not be determined with certainty.

“Disp”: We did not make precise measurements of displacements, but we include this attribute to note whether there was little to no vertical or horizontal displacement. Displacement estimates were primarily based on horizontal displacements because we did not have reference points to measure vertical displacement except in a few cases where adjacent structures did not settle and provided a point of reference. The displacement categories are defined as follows: mild displacement (on the order of 1 cm), moderate displacement (on the order of 10 cm), or major displacement (on the order of a meter or larger).

“Certainty”: This attribute represents the confidence that each feature was related to liquefaction or caused by the earthquake. This is categorized on a scale from one to three, in which 1 indicates the feature was field checked, its location is well-known, and it was likely related to liquefaction; 2 indicates the feature was field checked but it is unclear if the feature is actually related to liquefaction (e.g., bridge abutment damage) or if it was actually caused by the earthquake (e.g., some crab burrows in the area resemble sand boils); and 3 indicates the feature was mapped using satellite imagery and was not field checked.

“Comments”: This attribute includes additional relevant information.

The null point dataset only has the “Comments” attribute and an id (“fieldobs_id”) that corresponds to the entry of the field observations dataset (Allstadt and others, 2020) that the point comes from. Null points were taken directly from that field observations dataset. However, the original dataset contained many null observations collected for different purposes, so for this subset we have screened out only those null observations that are from locations where liquefaction susceptibility was estimated to be high but liquefaction did not occur. High susceptibility refers to areas with elevated probabilities on the USGS Ground Failure product liquefaction model map (https://earthquake.usgs.gov/data/ground-failure/background.php) and/or areas where the environment was conducive to liquefaction (strong shaking, close to bodies of water, soft soils). Landslides were mapped separately by Knoper and others (2020). A brief usage sketch follows the reference list below.

References:

Allstadt, K.E., Thompson, E.M., Bayouth Garcia, D., Brugman, E.I., Hernandez, J.L., Schmitt, R.G., Hughes, S.K., Fuentes, Z., Martinez, S.N., Cerovski-Darriau, C., Perkins, J.P., Grant, A.R., and Slaughter, S.L., 2020, Field observations of ground failure triggered by the 2020 Puerto Rico earthquake sequence: U.S. Geological Survey data release. https://doi.org/10.5066/P96QNFMB

Knoper, L., Allstadt, K.E., Clark, M.K., Thompson, E.M., and Schmitt, R.G., 2020, Inventory of landslides triggered by the 2020 Puerto Rico earthquake sequence: U.S. Geological Survey data release. https://doi.org/10.5066/P9U0IXLP

Miranda, E., Archbold, J., Heresi, P., Messina, A., Rosa, I., Kijewski-Correa, T., Mosalam, K., Prevatt, D., Robertson, I., and Roueche, D., 2020, StEER - Puerto Rico Earthquake Sequence December 2019 to January 2020: Early Access Reconnaissance Report (EARR). https://doi.org/10.17603/ds2-h0kd-5677

Morales-Velez, A.C., Bernal, J., Hughes, K.S., Pando, M., Perez, J.C., Rodriguez, L.A., and Suarez, L.E., 2020, Geotechnical Reconnaissance of the January 7, 2020 M6.4 Southwest Puerto Rico Earthquake and Associated Seismic Sequence: Geotechnical Extreme Event Reconnaissance Association GEER-066, accessed July 28, 2020, at http://www.geerassociation.org/component/geer_reports/?view=geerreports&id=93&layout=build
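As a usage illustration, the following geopandas sketch reads the polygon file and filters on the documented attributes ("Type", "Ejecta", "Certainty"). The attribute names and values follow the description above, but treat the exact strings and codings as assumptions to verify against the shapefile itself.

```python
# Hedged sketch: read the polygon inventory and filter on the documented attributes.
# The "Type" string and attribute codings are taken from the description above and
# should be verified against the actual shapefile contents.
import geopandas as gpd

polys = gpd.read_file("liquefaction_polygons_20210913.shp")

# Lateral-spreading polygons with noted sand ejecta and the highest-confidence rating.
spreading = polys[(polys["Type"] == "lateral spreading")
                  & (polys["Ejecta"] == 1)
                  & (polys["Certainty"] == 1)]
print(len(spreading), "of", len(polys), "polygons match")
```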
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents the detailed breakdown of the count of individuals within distinct income brackets, categorizing them by gender (men and women) and employment type - full-time (FT) and part-time (PT), offering valuable insights into the diverse income landscapes within Hope town. The dataset can be utilized to gain insights into gender-based income distribution within the Hope town population, aiding in data analysis and decision-making.
Key observations
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.
Income brackets:
Variables / Data Columns
Employment type classifications include:
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Hope town median household income by race; refer to that dataset for additional context.
This data set contains calibrated data taken by the New Horizons Long Range Reconnaissance Imager (LORRI) instrument during the Pluto encounter mission phase. This is VERSION 3.0 of this data set. It contains LORRI observations taken during the Approach (Jan-Jul, 2015), Encounter, Departure, and Transition mission sub-phases, including flyby observations taken on 14 July, 2015, and departure and calibration data through late October, 2016. Departure observations include a ring search of the Pluto system and 1994 JR1 observations. This data set completes the Pluto mission phase deliveries for LORRI.

Changes since version 2.0 include the addition of data downlinked between the end of January, 2016 and the end of October, 2016, completing the delivery of all data covering the Pluto Encounter and subsequent Calibration Campaign. It includes multi-map observations from the Approach phase, observations of the moons, hi-res, full-frame observations from Pluto Encounter and Departure, sliver maps, and ring search observations. There may be some overlap between prior datasets and this dataset, due to only partial, windowed, or lossy data in prior datasets. Observations at closest approach to Pluto are marked with _CA in the Request ID. This dataset also includes functional tests from the Calibration Campaign, including a regular observation of NGC3532. Finally, it includes the first set of distant KBO observations.

Updates were also made to the calibration files, documentation, and catalog files. There were minor changes to the level 2 LORRI calibration process, as well as to the LORRI calibration constants, for the final Pluto P3 PDS delivery. The process change involves gap removal during calibration. Files with gaps come in many flavors, depending on where the gap lies within the image. This update recognizes some additional possibilities, mainly that the gap might be close to the bottom or top of the image (and therefore the previous algorithm would fail because it filled the gap with median pixel info from both above and below the gap). The new algorithm will take the info from one side of the gap exclusively, when appropriate.
TL2CH4LN_7 is the Tropospheric Emission Spectrometer (TES)/Aura Level 2 Methane Lite Nadir Version 7 data product. TES was an instrument aboard NASA's Aura satellite and was launched from California on July 15, 2004. Data collection for TES is complete. TES Level 2 data contain retrieved species (or temperature) profiles at the observation targets and the estimated errors. The geolocation, quality, and other data (e.g., surface characteristics for nadir observations) were also provided. L2 modeled spectra were evaluated using radiative transfer modeling algorithms. The process, referred to as retrieval, compared observed spectra to the modeled spectra and iteratively updated the atmospheric parameters. L2 standard product files included information for one molecular species (or temperature) for an entire global survey or special observation run. A global survey consisted of a maximum of 16 consecutive orbits.

Nadir observations, which point directly to the surface of the Earth, are different from limb observations, which are pointed at various off-nadir angles into the atmosphere. Nadir and limb observations were added to separate L2 files, and a single ancillary file was composed of data that were common to both nadir and limb files. A Nadir sequence within the TES Global Survey was a fixed number of observations within an orbit for a Global Survey. Prior to April 24, 2005, it consisted of two low-resolution scans over the same ground locations. After April 24, 2005, Global Survey data consisted of three low-resolution scans. The Nadir standard product consists of four files, where each file is composed of the Global Survey Nadir observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Nadir observations only used a single filter mix.

A Global Survey consisted of observations along 16 consecutive orbits at the start of a two-day cycle, over which 4,608 retrievals were performed. Each observation was the input for retrievals of species Volume Mixing Ratios (VMRs), temperature profiles, surface temperature, and other data parameters with associated pressure levels, precision, total error, vertical resolution, total column density, and other diagnostic quantities. Each TES Level 2 standard product reported information in a swath format conforming to the HDF-EOS Aura File Format Guidelines. Each Swath object was bounded by the number of observations in a global survey and a predefined set of pressure levels representing slices through the atmosphere. Each standard product could have a variable number of observations depending upon the Global Survey configuration and whether averaging was employed. Also, missing or bad retrievals were not reported. Further, observations were occasionally scheduled on non-global survey days. In general, they were measurements made for validation purposes or with highly focused science objectives. Those non-global survey measurements were referred to as "special observations."

A Limb sequence within the TES Global Survey was three high-resolution scans over the same limb locations. The Limb standard product consists of four files, where each file is composed of the Global Survey Limb observations from one of four focal planes for a single orbit, i.e. 72 orbit sequences. The Global Survey Limb observations used a repeating sequence of filter wheel positions. Special Observations could only be scheduled during the 9 or 10 orbit gaps in the Global Surveys, and were conducted in any of three basic modes: stare, transect, and step-and-stare. The mode used depended on the science requirement. Each limb observation (Limb 1, Limb 2, and Limb 3) was processed independently. Thus, each limb standard product consisted of three sets, where each set consisted of 1,152 observations. For TES, the swath object represented one of these sets. Thus, each limb standard product consisted of three swath objects, one for each observation: Limb 1, Limb 2, and Limb 3.

The organization of data within the Swath object was based on a superset of the Upper Atmosphere Research Satellite (UARS) pressure levels used to report concentrations of trace atmospheric gases. The reporting grid was the same pressure grid used for modeling. There were 67 reporting levels from 1211.53 hPa, which allowed for very high surface pressure conditions, to 0.1 hPa, about 65 km. In addition, the products reported values directly at the surface when possible or at the observed cloud top level. Thus, in the Standard Product files, each observation could potentially contain estimates for the concentration of a particular molecule at 67 different pressure levels within the atmosphere. However, for most retrieved profiles, the highest pressure levels were not observed due to a surface at lower pressure or cloud obscuration. For pressure levels corresponding to altitudes below the cloud top or surface, where measurements were not possible, a fill value was applied.

To minimize the duplication of information between the individual species standard products, data fields common to each species (such as spacecraft coordinates, emissivity, and other data fields) were collected into a separate standard product, termed the TES L2 Ancillary Data product (Short name: TL2ANC). Users of this product should also obtain the Ancillary Data product.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents median income data over a decade or more for males and females categorized by Total, Full-Time Year-Round (FT), and Part-Time (PT) employment in Lake Arthur. It showcases annual income, providing insights into gender-specific income distributions and the disparities between full-time and part-time work. The dataset can be utilized to gain insights into gender-based pay disparity trends and explore the variations in income for male and female individuals.
Key observations: Insights from 2023
Based on our analysis of the ACS 2019-2023 5-Year Estimates, we present the following observations: - All workers, aged 15 years and older: In Lake Arthur, while the Census reported a median income of $26,125 for all female workers aged 15 years and older, data for males in the same category was unavailable due to an insufficient number of sample observations.
Because income data for males was not available from the Census Bureau, conducting a comprehensive analysis of gender-based pay disparity in the town of Lake Arthur was not possible.
- Full-time workers, aged 15 years and older: In Lake Arthur, for full-time, year-round workers aged 15 years and older, the Census reported a median income of $62,813 for males, while data for females was unavailable due to an insufficient number of sample observations. As there was no available median income data for females, conducting a comprehensive assessment of gender-based pay disparity in Lake Arthur was not feasible.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. All incomes have been adjusted for inflation and are presented in 2023-inflation-adjusted dollars.
Gender classifications include:
Employment type classifications include:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Lake Arthur median household income by race; refer to that dataset for additional context.