Performance Measure Definition: Average Call Processing Interval
Performance Measure Definition: Stroke Alert Call-to-Door Interval
Performance Measure Definition: STEMI Alert Call-to-Door Interval
Performance Measure Definition: Trauma Alert Scene Interval
Performance Measure Definition: Trauma Alert Call-to-Door Interval
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Descriptive statistics of the dataset: mean, standard deviation (SD), median, and the lower (5% quantile) and upper (95% quantile) bounds of the 90% confidence interval.
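A minimal sketch of how such a summary can be reproduced with pandas; the DataFrame `df` and its column are placeholders, not the dataset's actual schema:

```python
import pandas as pd

# Hypothetical numeric column standing in for one variable of the dataset.
df = pd.DataFrame({"value": [1.2, 3.4, 2.2, 5.1, 4.8, 0.9, 3.3]})

# describe() reports count, mean, std (SD), min, the requested percentiles
# (5%, 50% = median, 95%) and max for every numeric column.
summary = df.describe(percentiles=[0.05, 0.5, 0.95])
print(summary)
```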
This data release supports the analysis of the recurrence interval of post-fire debris-flow generating rainfall in the southwestern United States. We define the recurrence interval of the peak 15-, 30-, and 60-minute rainfall intensities for 316 observations of post-fire debris-flow occurrence in 18 burn areas, 5 U.S. states, and 7 climate types (as defined by Beck, H. E., Zimmermann, N. E., McVicar, T. R., Vergopolan, N., Berg, A., & Wood, E. F. (2018). Present and future Köppen-Geiger climate classification maps at 1-km resolution. Scientific Data, 5(1), 180214. doi:10.1038/sdata.2018.214).
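For context, a recurrence interval is conventionally the inverse of the annual exceedance probability of a given intensity. The sketch below uses a rank-based Weibull plotting position on made-up annual-maximum intensities to illustrate that convention only; it is not necessarily the method used in this data release.

```python
import numpy as np

# Hypothetical annual-maximum 15-minute rainfall intensities (mm/h) at one gauge.
annual_max_i15 = np.array([22.0, 35.5, 18.3, 41.2, 27.9, 30.1, 25.4, 48.6])

i15_sorted = np.sort(annual_max_i15)[::-1]      # largest first
n = i15_sorted.size
ranks = np.arange(1, n + 1)                     # 1 = largest value
exceedance_prob = ranks / (n + 1)               # Weibull plotting position
recurrence_interval_yr = 1.0 / exceedance_prob  # RI = 1 / annual exceedance probability

for intensity, ri in zip(i15_sorted, recurrence_interval_yr):
    print(f"I15 = {intensity:5.1f} mm/h  ->  RI ~ {ri:4.1f} yr")
```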
This digital dataset defines the well locations, perforated intervals, and time series of hydraulic-head observations used in the calibration of the transient hydrologic model of the Central Valley flow system. The Central Valley encompasses an approximate 50,000 square-kilometer region of California. The complex hydrologic system of the Central Valley is simulated using the U.S. Geological Survey (USGS) numerical modeling code MODFLOW-FMP (Schmid and others, 2006b). This simulation is referred to here as the Central Valley Hydrologic Model (CVHM) (Faunt, 2009). Utilizing MODFLOW-FMP, the CVHM simulates groundwater and surface-water flow, irrigated agriculture, land subsidence, and other key processes in the Central Valley on a monthly basis from 1961 to 2003. The USGS and CA-DWR maintain databases of key wells in the Central Valley that are web-accessible (http://waterdata.usgs.gov and http://www.water.ca.gov/waterdatalibrary/, respectively). These data were combined to form a database of available water levels throughout the Central Valley from 1961 to 2003. More than 850,000 water-level altitude measurements from more than 21,400 wells have been compiled by the USGS or CA-DWR and have been entered into their respective databases. However, only a small portion of these wells have both sufficient construction information to determine the well-perforation interval and water-level measurements for the simulation period. For model calibration, water-level altitude data were needed that were (1) distributed spatially (both geographically and vertically) throughout the Central Valley; (2) distributed temporally throughout the simulation period (years 1961-2003); and (3) available during both wet and dry climatic regimes. From the available well records, a subset of comparison wells was selected on the basis of perforation depths, completeness of record, climatic intervals, and locations throughout the Central Valley. A total of 19,725 water-level altitude observations for 206 wells were used as calibration targets during parameter estimation. The CVHM is the most recent regional-scale model of the Central Valley developed by the U.S. Geological Survey (USGS). The CVHM was developed as part of the USGS Groundwater Resources Program (see "Foreword", Chapter A, page iii, for details).
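A schematic of the kind of screening described above, keeping only wells with a known perforation interval and observations inside the 1961-2003 simulation period, might look like the following. The file names, column names, and record-count threshold are illustrative placeholders, not the published schema or selection criteria.

```python
import pandas as pd

# Hypothetical tables; the real data release uses its own field names.
wells = pd.read_csv("wells.csv")        # assumed columns: well_id, perf_top_m, perf_bottom_m
obs = pd.read_csv("water_levels.csv")   # assumed columns: well_id, date, head_altitude_m
obs["date"] = pd.to_datetime(obs["date"])

# 1) Keep wells with a known perforation interval.
has_perf = wells.dropna(subset=["perf_top_m", "perf_bottom_m"])

# 2) Keep observations inside the 1961-2003 simulation period.
in_period = obs[(obs["date"] >= "1961-01-01") & (obs["date"] <= "2003-12-31")]
counts = in_period.groupby("well_id").size()

# 3) Require a reasonably complete record (the threshold here is arbitrary).
candidates = has_perf[has_perf["well_id"].isin(counts[counts >= 20].index)]
print(len(candidates), "candidate calibration wells")
```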
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The PIADE dataset contains data from five industrial packaging machines:
## Raw Data
Each row represents a production interval, with the following schema:
There are 133 different alert types and 429,394 rows.
## Sequences (1h) data
For each piece of equipment, we define sequences of length 1 hour and aggregate the raw interval data as follows:
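A rough sketch of this kind of hourly aggregation with pandas; the file name and the columns `timestamp`, `equipment_id`, `alert_type`, and `produced_units` are placeholders, since the raw schema itself is documented separately in the dataset.

```python
import pandas as pd

# Hypothetical raw interval data; the real schema is described in the dataset docs.
raw = pd.read_csv("piade_raw.csv", parse_dates=["timestamp"])
# assumed columns: timestamp, equipment_id, alert_type, produced_units

# Resample each machine's raw intervals into 1-hour sequences and aggregate.
hourly = (
    raw.set_index("timestamp")
       .groupby("equipment_id")
       .resample("1h")
       .agg({"alert_type": "nunique", "produced_units": "sum"})
       .rename(columns={"alert_type": "n_alert_types"})
       .reset_index()
)
print(hourly.head())
```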
The City of Toronto's Transportation Services Division collects short-term traffic count data across the City on an ad-hoc basis to support a variety of safety initiatives and projects. The data available in this repository are a full collection of Speed, Volume and Classification Counts conducted across the City since 1993. The two most common types of short-term traffic counts are Turning Movement Counts and Speed / Volume / Classification Counts. Turning Movement Count data, comprising motor vehicle, bicycle and pedestrian movements through intersections, can be found here. Speed / Volume / Classification Counts are collected using pneumatic rubber tubes installed across the roadway. This dataset is a critical input into transportation safety initiatives, infrastructure design and program design such as speed limit changes, signal coordination studies, traffic calming and complete street designs.

Each Speed / Volume / Classification Count comprises motor vehicle count data collected over a continuous 24-hour to 168-hour period (1-7 days) at a single location. A handful of non-standard 2-week counts are also included. Some key notes about these counts:

- Not all counts have complete speed and classification data. These data are provided only for locations and dates where they exist.
- Raw data are recorded in 15-minute intervals.
- Raw data are recorded separately for each direction of traffic movement. Some data are only available for one direction, even if the street is two-way.
- Within each 15-minute interval, speed data are aggregated into approximately 5 km/h increments.
- Within each 15-minute interval, classification data are aggregated into vehicle type bins by the number of axles, according to the FWHA classification system attached below.

The following files showing different views of the data are available:

- Data Dictionary (svc_data_dictionary.xlsx): Provides a detailed definition of every data field in all files.
- Summary Data (svc_summary_data): Provides metadata about every Speed / Volume / Classification Count available, including information about the count location and count date, as well as summary data about each count (total vehicle volumes, average daily volumes, a.m. and p.m. peak hour volumes, average / 85th percentile / 95th percentile speeds, where available, and heavy vehicle percentage, where available).
- Most Recent Count Data (svc_most_recent_summary_data): Provides metadata about the most recent Speed / Volume / Classification Count data available at each location for which a count exists, including information about the count location and count date, as well as the summary data provided in the "Summary Data" file (see above).
- Raw Data: Raw data are available in 15-minute intervals and are distributed into one of three different file types based on the count type: volume-only, speed and volume, or classification and volume. If you are looking for 15-minute data for a specific count, identify the count type and count date, then download the raw data file associated with that count type and period. If you are looking for volume data for all count types, you will need to download and aggregate all three file types for a given period (see the sketch following this description).
- Volume Raw Data (svc_raw_data_volume_yyyy_yyyy): These files, grouped by 5-10 year interval, provide volume data in 15-minute intervals, for each direction separately. You will find the raw data for volume-only counts (ATR_VOLUME) here.
- Speed and Volume Raw Data (svc_raw_data_speed_yyyy_yyyy): These files, grouped by 5-10 year interval, provide volume data aggregated into speed bins in approximately 5 km/h increments. Speed data are not available for all counts. You will find the raw data for speed and volume counts (ATR_SPEED_VOLUME) here.
- Classification and Volume Raw Data (svc_raw_data_classification_yyyy_yyyy): These files, grouped by 5-10 year interval, provide volume data aggregated into vehicle type bins by the number of axles, according to the FWHA classification system. Classification data are not available for all counts. You will find the raw data for classification and volume counts (VEHICLE_CLASS) here.
- FWHA Classification Reference (fwha_classification.png): Provides a reference for the FWHA classification system.

This dataset references the City of Toronto's Street Centreline dataset, Intersection File dataset and Street Traffic Signal dataset.
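If you do need total volumes across all three count types, one way to combine the raw files is sketched below. The file names follow the svc_raw_data_<type>_yyyy_yyyy pattern described above, but the specific years and the column names (`count_id`, `time_start`, `direction`, `volume`) are assumptions rather than the published schema.

```python
import pandas as pd

# Hypothetical file names for one period, patterned on the naming described above.
files = [
    "svc_raw_data_volume_2015_2019.csv",
    "svc_raw_data_speed_2015_2019.csv",
    "svc_raw_data_classification_2015_2019.csv",
]

frames = [pd.read_csv(f, parse_dates=["time_start"]) for f in files]
raw = pd.concat(frames, ignore_index=True)

# Sum the 15-minute bins up to daily totals per count and direction.
daily = (
    raw.groupby(["count_id", "direction", raw["time_start"].dt.date])["volume"]
       .sum()
       .rename("daily_volume")
       .reset_index()
)
print(daily.head())
```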
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data provided by the Marine Institute; the dataset may also incorporate data from other agencies and bodies. This dataset shows the distribution of fishing effort by fishing vessels according to the gear type used. Fishing effort is defined as the time spent engaged in fishing operations or the time spent at sea; this time may be multiplied by a measure of fishing capacity, e.g. engine power. In this dataset fishing effort is measured as average hours spent actively fishing per square kilometre per year. Data from the years 2014 to 2018 were used to produce this data product for the Marine Institute publication "Atlas of Commercial Fisheries around Ireland, third edition" (https://oar.marine.ie/handle/10793/1432). Effort for offshore fisheries is based on two primary data types: data on vessel positioning and data on gear types used. Vessel Monitoring Systems (VMS) data supplied by the Irish Naval Service provide the geographical position and speed of each vessel at intervals of two hours or less (Commission Regulation (EC) No. 2244/2003). The data are available for all EU vessels of 12 m and larger operating inside the Irish EEZ; outside this zone only Irish VMS data are routinely available. VMS data do not record whether a vessel is fishing, steaming or inactive. Logbooks collected by the Sea-Fisheries Protection Authority and supplied by the Department of Agriculture, Food & the Marine were the primary data source for information on landings and gear types used by Irish vessels. The EU Fleet Register provides information for non-Irish vessels and for Irish vessels for which the gear was not known from the logbooks. Note that if vessels use more than one gear, it is possible that the gear type assigned to them was not the one actually used. The fishing gear data were classified into eight main groups: demersal otter trawls; beam trawls; demersal seines; gill and trammel nets; longlines; dredges; pots; and pelagic trawls. The VMS data were analysed using the approach described by Gerritsen and Lordan (IJMS 68(1)). This approach assigns effort to each of the VMS data points: the effort of a VMS data point is defined as the time interval since the previous data point. Next, the data are filtered for fishing activity using speed criteria; vessels were assumed to be actively fishing if their speed fell within a certain range (depending on the fishing gear used). The points that remain are then aggregated into a spatial grid to produce a raster dataset showing fishing effort (in hours) per square kilometre per year for each gear type group. The data are available for all countries combined and for Irish vessels only.
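In outline, the Gerritsen and Lordan style workflow described above can be sketched as follows. The column names, speed thresholds, and grid definition are illustrative placeholders, not the Marine Institute's actual processing code.

```python
import numpy as np
import pandas as pd

# Hypothetical VMS pings: vessel_id, timestamp, lat, lon, speed_knots, gear_group.
vms = pd.read_csv("vms_pings.csv", parse_dates=["timestamp"])
vms = vms.sort_values(["vessel_id", "timestamp"])

# 1) Effort of each ping = time since the previous ping of the same vessel (hours).
vms["effort_h"] = (
    vms.groupby("vessel_id")["timestamp"].diff().dt.total_seconds() / 3600.0
)

# 2) Keep pings whose speed falls in a gear-dependent "actively fishing" range.
speed_range = {"demersal_otter_trawl": (2.0, 4.0), "dredge": (1.5, 3.5)}  # illustrative only

def is_fishing(row):
    lo, hi = speed_range.get(row["gear_group"], (1.0, 5.0))
    return lo <= row["speed_knots"] <= hi

fishing = vms[vms.apply(is_fishing, axis=1) & vms["effort_h"].notna()]

# 3) Aggregate effort onto a regular grid (a simple lat/lon proxy for 1 km cells).
grid, xedges, yedges = np.histogram2d(
    fishing["lon"], fishing["lat"], bins=[500, 500], weights=fishing["effort_h"]
)
print("Total gridded effort (hours):", grid.sum())
```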
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
![Example observations](https://vision.eng.au.dk/wp-content/uploads/2020/07/example_obs-1024x206-1024x206.jpg)
The CloudCast dataset contains 70,080 cloud-labeled satellite images with 10 different cloud types corresponding to multiple layers of the atmosphere. The raw satellite images come from a satellite constellation in geostationary orbit centred at zero degrees longitude and arrive at 15-minute intervals from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). The resolution of these images is 3712 x 3712 pixels for the full disk of Earth, which means that every pixel corresponds to an area of 3 x 3 km. This is the highest possible resolution from European geostationary satellites when including infrared channels. Some pre- and post-processing of the raw satellite images, such as removing airplanes, is also done by EUMETSAT before the images are made public. We collect all the raw multispectral satellite images and annotate them individually on a pixel level using a segmentation algorithm. The full dataset then has a spatial resolution of 928 x 1530 pixels recorded at 15-minute intervals for the period 2017-2018, where each pixel represents an area of 3 x 3 km. To enable standardized datasets for benchmarking computer vision methods, this includes a full-resolution, gray-scaled dataset centered and projected over Europe (128 x 128).
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
If you use this dataset in your research or elsewhere, please cite/reference the following paper: CloudCast: A Satellite-Based Dataset and Baseline for Forecasting Clouds
There are 24 folders in the dataset containing the following information:
| File | Definition | Note |
| --- | --- | --- |
| X.npy | Numpy encoded array containing the actual 128x128 image with pixel values as labels, see below. | |
| GEO.npz | Numpy array containing geo coordinates where the image was taken (latitude and longitude). | |
| TIMESTAMPS.npy | Numpy array containing timestamps for each captured image. | Images are captured in 15-minute intervals. |
- 0 = No clouds or missing data
- 1 = Very low clouds
- 2 = Low clouds
- 3 = Mid-level clouds
- 4 = High opaque clouds
- 5 = Very high opaque clouds
- 6 = Fractional clouds
- 7 = High semitransparent thin clouds
- 8 = High semitransparent moderately thick clouds
- 9 = High semitransparent thick clouds
- 10 = High semitransparent above low or medium clouds
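A minimal sketch of reading one sample and decoding its label values. The file names follow the table above; the folder name and the array shape comment are assumptions.

```python
import numpy as np

LABELS = {
    0: "No clouds or missing data", 1: "Very low clouds", 2: "Low clouds",
    3: "Mid-level clouds", 4: "High opaque clouds", 5: "Very high opaque clouds",
    6: "Fractional clouds", 7: "High semitransparent thin clouds",
    8: "High semitransparent moderately thick clouds",
    9: "High semitransparent thick clouds",
    10: "High semitransparent above low or medium clouds",
}

folder = "2017M01"                                   # placeholder folder name
frames = np.load(f"{folder}/X.npy")                  # label array, e.g. (H, W) or (H, W, T)
times = np.load(f"{folder}/TIMESTAMPS.npy", allow_pickle=True)
geo = np.load(f"{folder}/GEO.npz")                   # latitude / longitude arrays

# Count how many pixels carry each cloud-type label.
values, counts = np.unique(frames, return_counts=True)
for v, c in zip(values, counts):
    print(f"{LABELS.get(int(v), 'unknown')}: {c} pixels")
```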
![cloudcast4](https://i.ibb.co/NFv55QW/cloudcast4.png)
![cloudcast3](https://i.ibb.co/3FhHzMT/cloudcast3.png)
![cloudcast2](https://i.ibb.co/9wCsJhR/cloudcast2.png)
![cloudcast1](https://i.ibb.co/9T5dbSH/cloudcast1.png)
Table from the American Community Survey (ACS) B19301 and B19313 per capita and aggregate income. These are multiple, nonoverlapping vintages of the 5-year ACS estimates of population and housing attributes starting in 2010, shown by the corresponding census tract vintage. Also includes the most recent release annually. King County, Washington census tracts with nonoverlapping vintages of the 5-year American Community Survey (ACS) estimates starting in 2010. Vintage identified in the "ACS Vintage" field. The census tract boundaries match the vintage of the ACS data (currently 2010 and 2020) so please note the geographic changes between the decades. Tracts have been coded as being within the City of Seattle as well as assigned to neighborhood groups called "Community Reporting Areas". These areas were created after the 2000 census to provide geographically consistent neighborhoods through time for reporting U.S. Census Bureau data. This is not an attempt to identify neighborhood boundaries as defined by neighborhoods themselves.

Vintages: 2010, 2015, 2020, 2021, 2022, 2023
ACS Table(s): B19301 and B19313
Data downloaded from: Census Bureau's Explore Census Data

The United States Census Bureau's American Community Survey (ACS): About the Survey, Geography & ACS, Technical Documentation, News & Updates. This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. Please cite the Census and ACS when using this data.

Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

Data Processing Notes: Boundaries come from the US Census TIGER geodatabases, specifically, the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2020 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters).
The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (Census Tracts beginning with 99). Percentages and derived counts, and associated margins of error, are calculated values (that can be identified by the "_calc_" stub in the field name), and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page.

Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:

- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
- The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.
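As a worked example of the margin-of-error note above, a published 90 percent MOE can be turned into confidence bounds and a standard error (ACS margins of error are published at the 90 percent level, which corresponds to a 1.645 z-factor). The numbers below are made up purely for illustration.

```python
# Made-up numbers purely to illustrate the MOE note above.
estimate = 32_500          # e.g., per capita income for one tract
moe_90 = 2_100             # published 90% margin of error

lower, upper = estimate - moe_90, estimate + moe_90   # 90% confidence bounds
standard_error = moe_90 / 1.645                        # ACS MOEs use the 90% z-factor
print(f"90% interval: [{lower}, {upper}], SE ~ {standard_error:.0f}")
```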
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset shows the distribution of fishing effort by fishing vessels according to the gear type used. Fishing effort is defined as the time spent engaged in fishing operations or the time spent at sea; this time may be multiplied by a measure of fishing capacity, e.g. engine power. In this dataset fishing effort is measured as average hours spent actively fishing per square kilometre per year. Data from the years 2014 to 2018 were used to produce this data product for the Marine Institute publication "Atlas of Commercial Fisheries around Ireland, third edition" (https://oar.marine.ie/handle/10793/1432). Effort for offshore fisheries is based on two primary data types: data on vessel positioning and data on gear types used. Vessel Monitoring Systems (VMS) data supplied by the Irish Naval Service provide the geographical position and speed of each vessel at intervals of two hours or less (Commission Regulation (EC) No. 2244/2003). The data are available for all EU vessels of 12 m and larger operating inside the Irish EEZ; outside this zone only Irish VMS data are routinely available. VMS data do not record whether a vessel is fishing, steaming or inactive. Logbooks collected by the Sea-Fisheries Protection Authority and supplied by the Department of Agriculture, Food and the Marine were the primary data source for information on landings and gear types used by Irish vessels. The EU Fleet Register (http://ec.europa.eu/fisheries/fleet/index.cfm) provides information for non-Irish vessels and for Irish vessels for which the gear was not known from the logbooks. Note that if vessels use more than one gear, it is possible that the gear type assigned to them was not the one actually used. The fishing gear data were classified into eight main groups: demersal otter trawls; beam trawls; demersal seines; gill and trammel nets; longlines; dredges; pots; and pelagic trawls. The VMS data were analysed using the approach described by Gerritsen and Lordan (IJMS 68(1)). This approach assigns effort to each of the VMS data points: the effort of a VMS data point is defined as the time interval since the previous data point. Next, the data are filtered for fishing activity using speed criteria; vessels were assumed to be actively fishing if their speed fell within a certain range (depending on the fishing gear used). The points that remain are then aggregated into a spatial grid to produce a raster dataset showing fishing effort (in hours) per square kilometre per year for each gear type group. The data are available for all countries combined and for Irish vessels only.
The City of Toronto's Transportation Services Division collects short-term traffic count data across the City on an ad-hoc basis to support a variety of safety initiatives and projects. The data available in this repository are a full collection of Turning Movement Counts (TMC) conducted across the City since 1984. The two most common types of short-term traffic counts are Turning Movement Counts and Speed / Volume / Classification Counts. Speed / Volume / Classification Count data, comprising vehicle speeds and volumes broken down by vehicle type, can be found here. Turning Movement Counts include the movements of motor vehicles, bicycles, and pedestrians through intersections. Counts are captured using video technology; older counts were conducted manually by field staff. The City of Toronto uses this data to inform signal timing and infrastructure design. Each Turning Movement Count comprises data collected over 8 non-continuous hours (before September 2023) or over a continuous 14-hour period (September 2023 and after), at a single location. Some key notes about these counts:

- Motor vehicle volumes are available for movements through the intersection (left-turn, right-turn and through-movement for each leg of the intersection).
- Motor vehicle volumes are further broken down by vehicle type (car, truck, bus).
- Total bicycle volumes approaching the intersection from each direction are available.
- Total pedestrian volumes crossing each leg of the intersection are available.
- Raw data are recorded and aggregated into 15-minute intervals.

The following files showing different views of the data are available:

- Data Dictionary (tmc_data_dictionary.xlsx): Provides a detailed definition of every data field in all files.
- Summary Data (tmc_summary_data): Provides metadata about every TMC available, including information about the count location and count date, as well as summary data about each count (total 8- or 14-hour pedestrian volumes, total 8- or 14-hour vehicle and bicycle volumes for each approach to the intersection, percent of total that are heavy vehicles, and a.m. and p.m. peak hour vehicle and bicycle volumes).
- Most Recent Count Data (tmc_most_recent_summary_data): Provides metadata about the most recent TMC available at each location for which a TMC exists, including information about the count location and count date, as well as the summary data provided in the "Summary Data" file (see above).
- Raw Data (tmc_raw_data_yyyy_yyyy): These files, grouped by 5-10 year interval, provide count volumes for cars, trucks, buses, cyclists and pedestrians in 15-minute intervals, for movements through the intersection, for every TMC available. Vehicle volumes are broken down by movement through the intersection (left-turn, right-turn and through-movement, for each approach), cyclist volumes are broken down by the leg on which they enter the intersection, and pedestrian volumes are broken down by the leg of the intersection they are counted crossing.

This dataset references the City of Toronto's Street Centreline dataset, Intersection File dataset and Street Traffic Signal dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Employment, Commuting, Occupation, Income, Health Insurance, Poverty, and more. This service is updated annually with American Community Survey (ACS) 5-year data. Contact: District of Columbia, Office of Planning. Email: planning@dc.gov. Geography: Census Tracts. Current Vintage: 2019-2023. ACS Table(s): DP03. Data downloaded from: Census Bureau's API for American Community Survey. Date of API call: January 2, 2025. National Figures: data.census.gov. Please cite the Census and ACS when using this data. Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables. Data Processing Notes: This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates. It is updated annually within days of the Census Bureau's release schedule. Boundaries come from the US Census TIGER geodatabases. Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines clipped for cartographic purposes. For census tracts, the water cutouts are derived from a subset of the 2020 AWATER (Area Water) boundaries offered by TIGER. For state and county boundaries, the water and coastlines are derived from the coastlines of the 500k TIGER Cartographic Boundary Shapefiles. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page. Data processed using R statistical package and ArcGIS Desktop. Margin of Error was not included in this layer but is available from the Census Bureau. Contact the Office of Planning for more information about obtaining Margin of Error values.
This layer shows computer ownership and internet access by education. This is shown by tract, county, and state boundaries. This service is updated annually to contain the most currently released American Community Survey (ACS) 5-year data, and contains estimates and margins of error. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. This layer is symbolized to show the percent of the population age 25+ who are high school graduates (includes equivalency) and have some college or associate's degree in households that have no computer. To see the full list of attributes available in this service, go to the "Data" tab, and choose "Fields" at the top right.

Current Vintage: 2019-2023
ACS Table(s): B28006
Data downloaded from: Census Bureau's API for American Community Survey
Date of API call: December 12, 2024
National Figures: data.census.gov

The United States Census Bureau's American Community Survey (ACS): About the Survey, Geography & ACS, Technical Documentation, News & Updates. This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.

Data Note from the Census: Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

Data Processing Notes: This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates. It is updated annually within days of the Census Bureau's release schedule. Click here to learn more about ACS data releases. Boundaries come from the US Census TIGER geodatabases, specifically, the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2023 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes.
The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (Census Tracts beginning with 99). Percentages and derived counts, and associated margins of error, are calculated values (that can be identified by the "_calc_" stub in the field name), and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page.

Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:

- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
- The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.
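A hedged sketch of the sentinel-value handling described above. The sentinel codes are abbreviated in the notes ("-4444...", "-5555..."), so they are kept abbreviated here as toy stand-ins; substitute the full constants documented for the raw API data, and note that the column name is a placeholder.

```python
import numpy as np
import pandas as pd

# Toy frame; "estimate" is a placeholder column name, values are illustrative only.
df = pd.DataFrame({"estimate": [1200.0, -4444.0, 950.0, -5555.0]})

# Codes abbreviated exactly as in the notes above; replace with the full sentinels.
null_codes = [-4444.0]   # set to null
zero_codes = [-5555.0]   # set to zero

df["estimate"] = df["estimate"].replace(null_codes, np.nan)
df["estimate"] = df["estimate"].replace(zero_codes, 0.0)
print(df)
```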
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Specification: The Data folder contains six subfolders (2D X-ray images, Experiment 1, Experiment 2, Experiment 3, Lunar orbit, and OMNI dataset) and an instruction file called Readme.txt. The contents of each subfolder are described below.

OMNI dataset folder: The solar wind parameters observed from May 14, 2009, 00:00:00 UTC, to May 15, 2009, 22:40:00 UTC. From top to bottom, the panels display solar wind proton number density, solar wind velocities in three directions (Vx, Vy, and Vz), interplanetary magnetic field strength in three directions (Bx, By, and Bz), and temperature (T).

Lunar orbit folder: Depicts the lunar trajectory within the time interval from May 14, 2009, 00:00:00 UTC, to May 15, 2009, 22:40:00 UTC, with a sampling interval of 1 minute and an angular separation of 24.2222° between the start and end points.

Experiment 1, Experiment 2, and Experiment 3 folders: Experiment 1 illustrates the PSNR and SSIM calculated for the linear interpolation method and the adaptive dynamic X-ray image estimation method against the MHD model X-ray images. Experiment 2 provides a detailed analysis of the results obtained with time intervals of 5, 10, 15, and 20 minutes for both the linear interpolation and adaptive X-ray image estimation methods. Experiment 3 illustrates the continuous evolution of maxima and bow shock profiles in the original MHD X-ray images, the linear interpolation results, and the adaptive estimation results over the interval from May 14, 2009, 00:10:00 UTC, to May 16, 2009, 10:30:00 UTC; the time period T of the linear interpolation and adaptive dynamic X-ray estimation methods is set to 5 minutes.

2D X-ray images folder: Contains the results of 2780 minutes of dynamic 2-D X-ray image estimation; the algorithm's estimation interval is set to 5 minutes.

This dataset can be used for related studies using 2-D X-ray estimates of Earth dynamics. The latest version adds the Super-SloMo code and corresponding comparison results, together with the code used to calculate SSIM and PSNR. All SSIM and PSNR results in this paper were computed after normalization.
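A minimal sketch of computing SSIM and PSNR after normalization with scikit-image; the array shapes, the random placeholder images, and the min-max normalization choice are assumptions, and the dataset's own included scripts may differ.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def normalize(img):
    """Scale an image to [0, 1] before comparison, as the normalization note above suggests."""
    img = img.astype(np.float64)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

# Placeholder arrays standing in for an MHD reference image and an estimated image.
reference = normalize(np.random.rand(256, 256))
estimate = normalize(np.random.rand(256, 256))

psnr = peak_signal_noise_ratio(reference, estimate, data_range=1.0)
ssim = structural_similarity(reference, estimate, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```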