This dataset includes New York State historical shoreline positions represented as digital vector polylines from 1880 to 2015. Shorelines were compiled from topographic survey sheets from the National Oceanic and Atmospheric Administration (NOAA). Historical shoreline positions can be used to assess the movement of shorelines through time. Rates of shoreline change were calculated in ArcMap 10.5.1 using the Digital Shoreline Analysis System (DSAS) version 5.0. DSAS uses a measurement baseline method to calculate rate of change statistics. Transects are cast from the reference baseline to intersect each shoreline, establishing measurement points used to calculate shoreline change rates. For wetland shorelines these rates can be interpreted as accretion or erosion.
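As a hedged illustration of the transect method, DSAS-style statistics such as the end-point rate (EPR) and linear regression rate (LRR) for a single transect reduce to simple arithmetic on the baseline-to-shoreline distances. The distances below are hypothetical, not values from this dataset:

```python
# Hypothetical transect measurements: year -> distance (m) from the baseline
# to the shoreline intersection. Distances are measured seaward from an
# onshore baseline, so decreasing distance indicates erosion.
observations = {1880: 120.0, 1933: 104.5, 1978: 96.0, 2015: 88.2}

years = sorted(observations)
dists = [observations[y] for y in years]

# End-point rate: net movement between the oldest and newest shorelines.
epr = (dists[-1] - dists[0]) / (years[-1] - years[0])

# Linear regression rate: least-squares slope through all shorelines.
n = len(years)
my, md = sum(years) / n, sum(dists) / n
lrr = (sum((y - my) * (d - md) for y, d in zip(years, dists))
       / sum((y - my) ** 2 for y in years))

print(f"EPR: {epr:.3f} m/yr, LRR: {lrr:.3f} m/yr")  # negative = erosion
```

DSAS computes these statistics per transect; the sign convention depends on baseline orientation, so interpret negative rates as retreat only under the convention noted in the comments.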
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
By [source]
This dataset collects job offers gathered by web scraping and filtered by specific keywords, locations and times. It gives users rich and precise search capabilities to uncover the best working solution for them. With the information collected, users can explore options that match their personal situation, skill set and preferences in terms of location and schedule. The columns provide detailed information on job titles, employer names, locations and time frames, as well as other parameters, so you can make a smart choice for your next career opportunity.
This dataset is a great resource for those looking to find an optimal work solution based on keywords, location and time parameters. With this information, users can quickly and easily search through job offers that best fit their needs. Here are some tips on how to use this dataset to its fullest potential:
Start by identifying what type of job offer you want to find. The keyword column will help you narrow down your search by allowing you to search for job postings that contain the word or phrase you are looking for.
Next, consider where the job is located – the Ubicació (Location) column tells you where in the world each posting is from, so make sure it’s somewhere that suits your needs!
Finally, consider when the position is available – the Temps_Oferta (Time) column indicates when each posting was made, as well as whether it’s a full-time, part-time or casual/temporary position, so make sure it meets your requirements before applying!
Additionally, if details such as hours per week or further schedule information are important criteria, there is also information in the Horari (Schedule) and Temps_Oferta (Time) columns. Once all three criteria have been ticked off – keywords, location and time frame – take a look at the Empresa (Company Name) and Nom_Oferta (Offer Name) columns to get an idea of who will be employing you should you land the gig!
Put together, these pieces of data should give any motivated individual all they need to seek out an optimal work solution. Keep hunting, and good luck!
- Machine learning can be used to group job offers, facilitating the identification of similarities and differences between them. This could allow users to target their search for a work solution more precisely.
- The data can be used to compare job offerings across different areas or types of jobs, enabling users to make better informed decisions in terms of their career options and goals.
- It may also provide insight into the local job market, enabling companies and employers to identify potential for new opportunities or trends that may previously have gone unnoticed.
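A minimal sketch of the first idea (grouping offers by similarity); the titles and the 0.3 Jaccard threshold below are hypothetical choices, not derived from the dataset:

```python
# Group job offers by keyword overlap (Jaccard similarity on title words).
# The titles are invented examples, not rows from the dataset.
offers = [
    "python backend developer",
    "senior python developer",
    "frontend react developer",
    "react frontend engineer",
]

def jaccard(a: set, b: set) -> float:
    """Fraction of shared words between two word sets."""
    return len(a & b) / len(a | b)

# Greedy grouping: attach each offer to the first group it resembles,
# comparing against that group's first (representative) title.
groups = []
for title in offers:
    words = set(title.split())
    for group in groups:
        if jaccard(words, set(group[0].split())) >= 0.3:
            group.append(title)
            break
    else:
        groups.append([title])

print(groups)
```

A production version would use a proper vectorizer and clustering algorithm, but the greedy threshold scheme shows the core idea: offers sharing enough vocabulary land in the same group.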
If you use this dataset in your research, please credit the original authors.
License: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication. No copyright: you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.
File: web_scraping_information_offers.csv

| Column name  | Description                         |
|:-------------|:------------------------------------|
| Nom_Oferta   | Name of the job offer. (String)     |
| Empresa      | Company offering the job. (String)  |
| Ubicació     | Location of the job offer. (String) |
| Temps_Oferta | Time of the job offer. (String)     |
| Horari       | Schedule of the job offer. (String) |
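As a sketch of the search workflow described above, the documented columns can be filtered with Python's standard csv module. The sample rows here are invented placeholders that only reuse the documented column names:

```python
import csv
import io

# Invented sample rows; in practice you would open
# web_scraping_information_offers.csv instead of this inline string.
sample = """Nom_Oferta,Empresa,Ubicació,Temps_Oferta,Horari
Python Developer,Acme,Barcelona,Fa 2 dies,Jornada completa
Data Analyst,Initech,Madrid,Fa 5 dies,Mitja jornada
Backend Engineer,Acme,Barcelona,Fa 1 dia,Jornada completa
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Filter on keyword (Nom_Oferta), location (Ubicació) and schedule (Horari).
hits = [r for r in rows
        if "Developer" in r["Nom_Oferta"]
        and r["Ubicació"] == "Barcelona"
        and r["Horari"] == "Jornada completa"]

for r in hits:
    print(r["Nom_Oferta"], "@", r["Empresa"])
```

The same three-way filter (keyword, location, time/schedule) mirrors the tips above; swap the literals for your own criteria.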
License: MIT (https://opensource.org/licenses/MIT)
License information was derived automatically
V2 is out!
Simple "Reflection" method dataset inspired by mattshumer
This is the prompt-and-response version. Find the ShareGPT version here.
This dataset was synthetically generated using Glaive AI.
DISCOVERAQ_Colorado_Ground_BAOTower_Data contains data collected at the BAO Tower ground site during the Colorado (Denver) deployment of NASA's DISCOVER-AQ field study. This data product contains data for only the Colorado deployment and data collection is complete.

Understanding the factors that contribute to near surface pollution is difficult using only satellite-based observations. The incorporation of surface-level measurements from aircraft and ground-based platforms provides the crucial information necessary to validate and expand upon the use of satellites in understanding near surface pollution. Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) was a four-year campaign conducted in collaboration between NASA Langley Research Center, NASA Goddard Space Flight Center, NASA Ames Research Center, and multiple universities to improve the use of satellites to monitor air quality for public health and environmental benefit. Through targeted airborne and ground-based observations, DISCOVER-AQ enabled more effective use of current and future satellites to diagnose ground level conditions influencing air quality.

DISCOVER-AQ employed two NASA aircraft, the P-3B and King Air, with the P-3B completing in-situ spiral profiling of the atmosphere (aerosol properties, meteorological variables, and trace gas species). The King Air conducted both passive and active remote sensing of the atmospheric column extending below the aircraft to the surface. Data from an existing network of surface air quality monitors, AERONET sun photometers, Pandora UV/vis spectrometers and model simulations were also collected. Further, DISCOVER-AQ employed many surface monitoring sites, with measurements being made on the ground, in conjunction with the aircraft. The B200 and P-3B conducted flights in Baltimore-Washington, D.C. in 2011, Houston, TX in 2013, San Joaquin Valley, CA in 2013, and Denver, CO in 2014. These regions were targeted due to being in violation of the National Ambient Air Quality Standards (NAAQS).

The first objective of DISCOVER-AQ was to determine and investigate correlations between surface measurements and satellite column observations for the trace gases ozone (O3), nitrogen dioxide (NO2), and formaldehyde (CH2O) to understand how satellite column observations can diagnose surface conditions. DISCOVER-AQ also had the objective of using surface-level measurements to understand how satellites measure diurnal variability and to understand what factors control diurnal variability. Lastly, DISCOVER-AQ aimed to explore horizontal scales of variability, such as regions with steep gradients and urban plumes.
License: GNU LGPL 3.0 (http://www.gnu.org/licenses/lgpl-3.0.html)
The Traveling Salesperson Problem (TSP) is a classic problem in computer science that seeks the shortest route through a group of cities. It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.
![World Map](https://data.heatonresearch.com/images/wustl/kaggle/tsp/world-tsp.png)
In this Kaggle competition, your goal is not to find the shortest route among cities. Rather, you must attempt to determine the length of the route drawn on each map.
The data for this competition is not made up of real-world maps, but rather randomly generated maps that vary in size, city count, and optimality of the routes. The following image demonstrates a relatively small map, with few cities and an optimal route.
![Small Map](https://data.heatonresearch.com/images/wustl/kaggle/tsp/1.jpg)
Not all maps are this small, nor are all routes this close to optimal. Consider the following map, which is much larger.
![Larger Map](https://data.heatonresearch.com/images/wustl/kaggle/tsp/6.jpg)
The following attributes were randomly selected to generate each image.
The path distance is based on the sum of the Euclidean distance of all segments in the path. The distance units are in pixels.
This is a regression problem: you are to estimate the total path length. There are several challenges to consider.
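The distance definition above can be sketched in a few lines; the coordinates below are hypothetical pixel positions, not taken from any competition map:

```python
import math

# Hypothetical city coordinates in pixels, visited in route order
# (the last point returns to the start to close the tour).
route = [(10, 10), (40, 50), (100, 50), (120, 10), (10, 10)]

# Path distance: sum of the Euclidean lengths of consecutive segments.
length = sum(math.dist(a, b) for a, b in zip(route, route[1:]))
print(f"{length:.1f} pixels")
```

This is the quantity a regression model must estimate from the rendered image alone.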
The following picture shows a section from one map zoomed to the pixel-level:
![TSP Zoom](https://data.heatonresearch.com/images/wustl/kaggle/tsp/tsp_zoom.jpg)
The following CSV files are provided, in addition to the images.
The tsp-all.csv file contains the following data.
id,filename,distance,key
0,0.jpg,83110,503x673-270-83110.jpg
1,1.jpg,1035,906x222-10-1035.jpg
2,2.jpg,20756,810x999-299-20756.jpg
3,3.jpg,13286,781x717-272-13286.jpg
4,4.jpg,13924,609x884-312-13924.jpg
The columns are: id, filename, distance, and key (as shown in the header row above).
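A minimal sketch for parsing the sample rows above. Note that the apparent WIDTHxHEIGHT-N-DISTANCE structure of the key field is an inference from these rows, not documented by the competition:

```python
import csv
import io

# The first few sample rows from tsp-all.csv, as shown above.
sample = """id,filename,distance,key
0,0.jpg,83110,503x673-270-83110.jpg
1,1.jpg,1035,906x222-10-1035.jpg
2,2.jpg,20756,810x999-299-20756.jpg
"""

for row in csv.DictReader(io.StringIO(sample)):
    # The key appears to encode WIDTHxHEIGHT-N-DISTANCE.jpg; this is an
    # inference from the sample rows, so verify before relying on it.
    dims, n, dist = row["key"].removesuffix(".jpg").split("-")
    width, height = dims.split("x")
    assert dist == row["distance"]  # distance column matches the key suffix
    print(row["filename"], width, height, n, dist)
```

In the three sample rows, the trailing number in key always equals the distance column, which is what the assertion checks.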
DISCOVERAQ_Texas_Ground_Analysis_Ancillary_Data contains data collected at ancillary ground sites during the Texas (Houston) deployment of NASA's DISCOVER-AQ field study. This data product contains data for only the Texas deployment and data collection is complete.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Tuscaloosa population over the last 20-plus years. It lists the population for each year, along with the year-on-year change in population and the change in percentage terms. The dataset can be used to understand the population change of Tuscaloosa across the last two decades. For example, we can identify whether the population is declining or increasing, when the population peaked, or whether it is still growing and has not yet reached its peak. We can also compare the trend with the overall trend of the United States population over the same period.
Key observations
In 2022, the population of Tuscaloosa was 110,602, a 1.39% year-over-year increase from 2021. Previously, in 2021, the Tuscaloosa population was 109,082, an increase of 4.67% compared to a population of 104,214 in 2020. Over the last 20-plus years, between 2000 and 2022, the population of Tuscaloosa increased by 31,687. In this period, the peak population was 110,602, in the year 2022. The numbers suggest that the population has not yet reached its peak and is showing a trend of further growth. Source: U.S. Census Bureau Population Estimates Program (PEP).
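The quoted percentages can be reproduced directly from the population counts (a quick arithmetic check, not additional data):

```python
# Populations quoted in the key observations above.
pop = {2020: 104_214, 2021: 109_082, 2022: 110_602}

def yoy_change(prev: int, curr: int) -> float:
    """Year-over-year change in percent."""
    return (curr - prev) / prev * 100

print(f"2021 -> 2022: {yoy_change(pop[2021], pop[2022]):.2f}%")  # 1.39%
print(f"2020 -> 2021: {yoy_change(pop[2020], pop[2021]):.2f}%")  # 4.67%
```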
When available, the data consists of estimates from the U.S. Census Bureau Population Estimates Program (PEP).
Data Coverage:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report or presentation, you can contact our research staff at research@neilsberg.com to assess the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Tuscaloosa Population by Year. You can refer to it here.
The Massachusetts Office of Coastal Zone Management launched the Shoreline Change Project in 1989 to identify erosion-prone areas of the coast. The shoreline position and change rate are used to inform management decisions regarding the erosion of coastal resources. In 2001, a shoreline from 1994 was added to calculate both long- and short-term shoreline change rates along ocean-facing sections of the Massachusetts coast. In 2013, two oceanfront shorelines for Massachusetts were added using 2008-9 color aerial orthoimagery and 2007 topographic lidar datasets obtained from the National Oceanic and Atmospheric Administration's Ocean Service, Coastal Services Center. This 2018 data release includes rates that incorporate two new mean high water (MHW) shorelines for the Massachusetts coast extracted from lidar data collected between 2010 and 2014. The first new shoreline for the State includes data from 2010 along the North Shore and South Coast from lidar data collected by the U.S. Army Corps of Engineers (USACE) Joint Airborne Lidar Bathymetry Technical Center of Expertise. Shorelines along the South Shore and Outer Cape are from 2011 lidar data collected by the U.S. Geological Survey's (USGS) National Geospatial Program Office. Shorelines along Nantucket and Martha’s Vineyard are from a 2012 USACE Post Sandy Topographic lidar survey. The second new shoreline for the North Shore, Boston, South Shore, Cape Cod Bay, Outer Cape, South Cape, Nantucket, Martha’s Vineyard, and the South Coast (around Buzzards Bay to the Rhode Island border) is from 2013-14 lidar data collected by the USGS Coastal and Marine Geology Program. This 2018 update of the rate of shoreline change in Massachusetts includes two types of rates. Some of the rates include a proxy-datum bias correction; this is indicated in the filename with “PDB”. The rates that do not account for this correction have “NB” in their file names.
The proxy-datum bias is applied because in some areas a proxy shoreline (like a High Water Line shoreline) has a bias when compared to a datum shoreline (like a Mean High Water shoreline). In areas where it exists, this bias should be accounted for when calculating rates using a mix of proxy and datum shorelines. This issue is explained further in Ruggiero and List (2009) and in the process steps of the metadata associated with the rates. This release includes both long-term (~150 years) and short-term (~30 years) rates. Files associated with the long-term rates have “LT” in their names; files associated with short-term rates have “ST” in their names.
Sandy ocean beaches are a popular recreational destination, often surrounded by communities containing valuable real estate. Development is on the rise despite the fact that coastal infrastructure is subjected to flooding and erosion. As a result, there is an increased demand for accurate information regarding past and present shoreline changes. To meet these national needs, the Coastal and Marine Geology Program of the U.S. Geological Survey (USGS) is compiling existing reliable historical shoreline data along open-ocean sandy shores of the conterminous United States and parts of Alaska and Hawaii under the National Assessment of Shoreline Change project. There is no widely accepted standard for analyzing shoreline change. Existing shoreline data measurements and rate calculation methods vary from study to study and prevent combining results into state-wide or regional assessments. The impetus behind the National Assessment project was to develop a standardized method of measuring changes in shoreline position that is consistent from coast to coast. The goal was to facilitate the process of periodically and systematically updating the results in an internally consistent manner.
Due to continued coastal population growth and increased threats of erosion, current data on trends and rates of shoreline movement are required to inform shoreline and floodplain management. The Massachusetts Office of Coastal Zone Management launched the Shoreline Change Project in 1989 to identify erosion-prone areas of the coast. In 2001, a 1994 shoreline was added to calculate both long- and short-term shoreline change rates at 40-meter intervals along ocean-facing sections of the Massachusetts coast. The Coastal and Marine Geology Program of the U.S. Geological Survey (USGS), in cooperation with the Massachusetts Office of Coastal Zone Management, has compiled reliable historical shoreline data along open-facing sections of the Massachusetts coast under the Massachusetts Shoreline Change Mapping and Analysis Project 2013 Update. Two oceanfront shorelines for Massachusetts (approximately 1,800 km) were (1) delineated using 2008/09 color aerial orthoimagery, and (2) extracted from topographic LIDAR datasets (2007) obtained from NOAA's Ocean Service, Coastal Services Center. The new shorelines were integrated with existing Massachusetts Office of Coastal Zone Management (MA CZM) and USGS historical shoreline data in order to compute long- and short-term rates using the latest version of the Digital Shoreline Analysis System (DSAS).
Table of neighbourhoods used to calculate the numbers of inhabitants and jobs at stake for each flood scenario.
A series of spatial data produced by the GIS of the High Flood Risk Territory (TRI) in metropolitan France and mapped for reporting purposes under the European Flood Directive. European Directive 2007/60/EC of 23 October 2007 on the assessment and management of flood risks (OJ L 288, 06-11-2007, p. 27) shapes the flood prevention strategy in Europe. It requires the production of flood risk management plans to reduce the negative consequences of flooding on human health, the environment, cultural heritage and economic activity. The objectives and implementation requirements are set out in the Law of 12 July 2010 on the National Commitment for the Environment (LENE) and the Decree of 2 March 2011. In this context, the primary objective of flood and flood risk mapping for TRIs is to contribute, by homogenising and objectifying knowledge of flood exposure, to the development of flood risk management plans. This dataset is used to produce flood surface maps and flood risk maps, which represent flood hazards and issues at an appropriate scale, respectively. Their objective is to provide quantitative evidence to further assess the vulnerability of a territory for the three levels of probability of flooding (high, medium, low).
License: CC0 1.0 Universal (https://spdx.org/licenses/CC0-1.0.html)
Reconstruction maps from cryo-electron microscopy (cryo-EM) exhibit distortion when the cryo-EM dataset is incomplete, usually because of unevenly distributed orientations. Prior efforts attempted to address this preferred-orientation problem using tilt-collection strategies or modifications to grids or to air-water interfaces. However, these approaches often require time-consuming experiments, and their effect is protein-dependent. Here, we developed a procedure that removes misaligned particles and applies an iterative reconstruction method based on the signal-to-noise ratio of Fourier components, correcting such distortion by recovering missing data with a purely computational algorithm. This procedure, called the Signal-to-Noise Ratio Iterative Reconstruction Method (SIRM), was applied to incomplete datasets of various proteins to fix distortion in cryo-EM maps and achieve a more isotropic resolution. In addition, SIRM provides a better reference map for further reconstruction refinements, resulting in improved alignment, which ultimately improves map quality and benefits model building.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set contains the simulations and data analysis files used in the publication: "Mapping magnetic signals of individual magnetite grains to their internal magnetic configurations using micromagnetic models", by D. Cortés-Ortuño, K. Fabian and L. V. de Groot.

The data set includes:
- Scripts and output files from MERRILL simulations
- Jupyter notebooks with data analysis
- Figures

A preprint of this work can be found in: David Cortés-Ortuño, Karl Fabian and Lennart V. de Groot. Mapping magnetic signals of individual magnetite grains to their internal magnetic configurations using micromagnetic models. Earth and Space Science Open Archive. https://doi.org/10.1002/essoar.10510574.1

The README file in this dataset (in markdown format) contains full details about the simulations. The dataset also contains pre-computed data files to calculate the inversions, produce the figures and analyze the inversion data without processing the vbox files.

To cite this dataset you can use the following BibTeX entry:

@Misc{Cortes2022,
  author    = {Cortés-Ortuño, David and Fabian, Karl and de Groot, Lennart V.},
  title     = {{Data set for: Mapping magnetic signals of individual magnetite grains to their internal magnetic configurations using micromagnetic models}},
  publisher = {Zenodo},
  year      = {2022},
  doi       = {10.5281/zenodo.6501818},
  url       = {https://doi.org/10.5281/zenodo.6501818},
}
DISCOVERAQ_Maryland_Ground_Essex_Data contains data collected at the Essex ground site during the Maryland (Baltimore-Washington) deployment of NASA's DISCOVER-AQ field study. This data product contains data for only the Maryland deployment and data collection is complete.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset has 21 columns that carry the features (questions) of 988 respondents. The efficiency of any machine learning model is heavily dependent on its raw initial dataset, so we had to be extra careful in gathering our information. For our particular problem, we needed data that was not only authentic but also versatile enough to capture the proper information from relevant sources, so we opted to build our dataset by dispatching a survey questionnaire among targeted audiences. We built the questionnaire from inquiries made after keen observation: studying the behavior of our intended audience, we came up with factual and informative queries that generated appropriate data. Our prime audience were those who were highly into buying fashion accessories, so we created a set of questionnaires that emphasized questions related to that field. We had a total of twenty-one well-revised questions that gave us an overview of all the answers our system would need. As such, we had the opportunity to gather over five hundred authentic leads and concluded our initial raw dataset accordingly.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This open-access dataset provides a detailed time-motion study of construction work, specifically focusing on MEP (Mechanical, Electrical, and Plumbing) activities. The dataset is intended to facilitate research and analysis to improve operational efficiency and safety within the construction industry. It includes anonymized and pseudonymized data, ensuring privacy while still offering valuable insights into worker activities.
Contents:
1. Time-motion study dataset: captures categorized work activities by MEP workers at a second-by-second level.
2. Description of work activities: provides detailed classifications of the tasks performed, allowing for in-depth analysis.
This dataset has been made publicly available under the CC-BY-SA license, encouraging reuse and redistribution with proper attribution and share-alike terms. By downloading the dataset, users acknowledge and agree to comply with the terms outlined above.
Funding and Support: This work has been supported by the “Hukka LVI- ja sähkötöissä” (Waste in Plumbing and Electrical Work) project, funded by STUL (Electrical Contractor Association), LVI-TU (HVAC Contractor Association), and STTA (Electrical Employers Union) from Finland.
This comprehensive dataset offers valuable resources for research and analysis purposes. For further information or collaboration inquiries, feel free to reach out to discuss data collection methods and potential research partnerships: olli.seppanen@aalto.fi & christopher.gorsch@vtt.fi.
DISCOVERAQ_California_Ozonesondes_Data contains data collected via ozonesonde launches at the Porterville ground site during the California (San Joaquin Valley) deployment of NASA's DISCOVER-AQ field study. This data product contains data for only the California deployment and data collection is complete.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Table of municipalities used to calculate the numbers of inhabitants and jobs at stake for each flood scenario. This is part of a series of spatial data produced for the Paris Territory at High Flood Risk (TRI) and mapped for reporting purposes under the European Floods Directive. European Directive 2007/60/EC of 23 October 2007 on the assessment and management of flood risks (OJ L 288, 6 November 2007, p. 27) shapes the flood prevention strategy in Europe. It requires the production of flood risk management plans to reduce the negative consequences of flooding on human health, the environment, cultural heritage, and economic activity. The objectives and implementation requirements are set out in the French Law of 12 July 2010 on the National Commitment for the Environment (LENE) and the Decree of 2 March 2011. In this context, the primary objective of flood hazard and flood risk mapping for TRIs is to contribute, by homogenising and objectifying knowledge of flood exposure, to the development of flood risk management plans (FRMPs). This dataset is used to produce flood surface maps and flood risk maps, which represent flood hazards and stakes at an appropriate scale, respectively. Their objective is to provide quantitative evidence for further assessing the vulnerability of a territory under the three levels of flood probability (high, medium, low).
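A per-municipality stakes table of this kind is typically aggregated to per-scenario totals. A minimal sketch with hypothetical figures (the municipality names, scenarios, and counts below are illustrative, not values from the TRI dataset):

```python
from collections import defaultdict

# Hypothetical rows: (municipality, flood scenario, inhabitants, jobs)
rows = [
    ("Commune A", "high",   1200,  400),
    ("Commune B", "high",    800,  150),
    ("Commune A", "medium", 5400, 2100),
    ("Commune B", "low",    9100, 3600),
]

# Sum the inhabitants and jobs at stake for each flood scenario.
totals = defaultdict(lambda: {"inhabitants": 0, "jobs": 0})
for _municipality, scenario, inhabitants, jobs in rows:
    totals[scenario]["inhabitants"] += inhabitants
    totals[scenario]["jobs"] += jobs
```

The resulting per-scenario totals are the kind of quantitative evidence the flood risk maps summarise for the high, medium, and low probability levels.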
DISCOVERAQ_California_Pandora_Data contains all of the Pandora instrumentation data collected during the DISCOVER-AQ field study. Contained in this dataset are column measurements of NO2 and O3. Pandoras were situated at various ground sites across the study area, including Arvin-DiGiorgio, Bakersfield, Corcoran, Fresno, Hanford, Huron, Madera, Parlier, Porterville, Shafter, Tranquility, and Visalia Airport. This data product contains only data from the California deployment, and data collection is complete.

Understanding the factors that contribute to near surface pollution is difficult using only satellite-based observations. The incorporation of surface-level measurements from aircraft and ground-based platforms provides the crucial information necessary to validate and expand upon the use of satellites in understanding near surface pollution. Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) was a four-year campaign conducted in collaboration between NASA Langley Research Center, NASA Goddard Space Flight Center, NASA Ames Research Center, and multiple universities to improve the use of satellites to monitor air quality for public health and environmental benefit. Through targeted airborne and ground-based observations, DISCOVER-AQ enabled more effective use of current and future satellites to diagnose ground-level conditions influencing air quality.

DISCOVER-AQ employed two NASA aircraft, the P-3B and the King Air (B200). The P-3B completed in-situ spiral profiling of the atmosphere (aerosol properties, meteorological variables, and trace gas species), while the King Air conducted both passive and active remote sensing of the atmospheric column extending below the aircraft to the surface. Data from an existing network of surface air quality monitors, AERONET sun photometers, Pandora UV/vis spectrometers, and model simulations were also collected. Further, DISCOVER-AQ employed many surface monitoring sites, with measurements being made on the ground in conjunction with the aircraft. The B200 and P-3B conducted flights in Baltimore-Washington, D.C. in 2011, Houston, TX in 2013, San Joaquin Valley, CA in 2013, and Denver, CO in 2014. These regions were targeted because they were in violation of the National Ambient Air Quality Standards (NAAQS).

The first objective of DISCOVER-AQ was to determine and investigate correlations between surface measurements and satellite column observations for the trace gases ozone (O3), nitrogen dioxide (NO2), and formaldehyde (CH2O) to understand how satellite column observations can diagnose surface conditions. DISCOVER-AQ also had the objective of using surface-level measurements to understand how satellites measure diurnal variability and what factors control it. Lastly, DISCOVER-AQ aimed to explore horizontal scales of variability, such as regions with steep gradients and urban plumes.
The Massachusetts Office of Coastal Zone Management launched the Shoreline Change Project in 1989 to identify erosion-prone areas of the coast by compiling a database of historical (mid-1800s to 1989) shoreline positions. Trends of shoreline position over long- and short-term timescales provide information to landowners, managers, and potential buyers about possible future impacts to coastal resources and infrastructure. In 2001, a 1994 shoreline was added to calculate both long- and short-term shoreline change rates along ocean-facing sections of the Massachusetts coast. In 2013, two oceanfront shorelines for Massachusetts were added using 2008-2009 color aerial orthoimagery and 2007 topographic lidar datasets obtained from the National Oceanic and Atmospheric Administration's (NOAA) Ocean Service, Coastal Services Center. In 2018, two new mean high water (MHW) shorelines for the Massachusetts coast, extracted from lidar data collected between 2010 and 2014, were added to the dataset.

This 2021 data release includes rates that incorporate one new shoreline, extracted from 2018 lidar data by the U.S. Army Corps of Engineers (USACE) Joint Airborne Lidar Bathymetry Technical Center of Expertise (JALBTCX), added to the existing database of all historical shorelines (1844-2014) for the North Shore, South Shore, Cape Cod Bay, Outer Cape, Buzzard's Bay, South Cape, Nantucket, and Martha's Vineyard. The 2018 lidar data did not cover the Boston or Elizabeth Islands regions. Included in this data release is a proxy-datum bias reference line that accounts for the positional difference between a proxy shoreline (the High Water Line shoreline) and a datum shoreline (the Mean High Water shoreline). This issue is explained further in Ruggiero and List (2009) and in the process steps of the metadata associated with the rates. This release includes both long-term (~150+ years) and short-term (~30 years) rates. Files associated with long-term rates have "LT" in their names; files associated with short-term rates have "ST" in their names.
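Shoreline change rates like those in these releases are computed per transect from shoreline positions measured at different survey dates. A minimal sketch of two common rate statistics, the end point rate and the linear regression rate, using illustrative positions (not values from the Massachusetts dataset):

```python
from datetime import date

# Hypothetical distances (m) from a reference baseline to the shoreline
# along one transect, at three survey dates. Illustrative values only.
observations = [
    (date(1850, 1, 1), 120.0),
    (date(1994, 1, 1), 102.5),
    (date(2018, 1, 1),  95.0),
]

def end_point_rate(obs):
    """End point rate: net shoreline movement divided by elapsed years (m/yr).
    Negative values indicate landward movement (erosion)."""
    (t0, d0), (t1, d1) = obs[0], obs[-1]
    years = (t1 - t0).days / 365.25
    return (d1 - d0) / years

def linear_regression_rate(obs):
    """Linear regression rate: least-squares slope of distance vs. time (m/yr)."""
    xs = [(t - obs[0][0]).days / 365.25 for t, _ in obs]
    ys = [d for _, d in obs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

epr = end_point_rate(observations)
lrr = linear_regression_rate(observations)
```

The end point rate uses only the oldest and newest shorelines, while the linear regression rate uses every shoreline crossing the transect; published releases like this one also report uncertainties alongside the rates.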